A Modest Proposal for Responsible Artificial Intelligence: With Apologies to Jonathan Swift and the Movie War Games

There has been a lot of discussion over the last few years about what constitutes responsible Artificial Intelligence. With the advent of generative and agentic AI, the need for some ground rules has become even more timely. So, in the spirit of moving this discussion along, let's consider a recent modest proposal of what we can all agree would NOT be a responsible framework for the use of AI. This AI framework would:

1) disregard every constraint that engineers consider "dumb" or that slows down development in any way;
2) accept outcomes that could lead to destruction of property and possible injury or death to individuals when less extreme options that accomplish the same or similar objectives exist;
3) deemphasize human intervention as woke and a drag on momentum;
4) remove anyone who questions the purpose or outcomes of the model;
5) ensure that the model achieves its objective at the fastest possible rate;
6) disestablish and abolish any and all governance councils or working groups; and
7) value speed above all else.

This framework is exactly what Secretary of War Pete Hegseth proposed last week when he spoke to employees at SpaceX while Elon Musk looked on approvingly. In his remarks, Secretary Hegseth proposed a "warrior ethos" approach to AI that he suggested would still be legal even though the government would be "blowing up" barriers that imposed any kind of constraint on data sharing or authority to operate, or that required crazy things like testing, evaluation, and contracting. The Secretary's remarks are a stark reminder of why substantive discussions on Responsible AI must continue.
Keeping humans in the loop, considering less discriminatory alternatives, maintaining governance requirements, evaluating the data used in models, and ensuring that testing, evaluation, and contracting protocols remain in place are crucial parts of the responsible use of AI in the financial sector. Responsible does not mean slow, but it should mean thoughtfulness about the risk of negative outcomes and the speed at which AI can propagate them. Discussion of the use of AI in the military setting helps put in perspective what is responsible when we are making financial decisions. Even though the stakes are arguably lower, the impacts are still significant.