
Responsible technology use in the AI age

by News7

The sudden appearance of application-ready generative AI tools over the last year has confronted us with challenging social and ethical questions. Visions of how this technology could deeply alter the ways we work, learn, and live have also accelerated conversations—and breathless media headlines—about how and whether these technologies can be responsibly used.

Responsible technology use, of course, is nothing new. The term encompasses a broad range of concerns, from the bias that might be hidden inside algorithms, to the data privacy rights of the users of an application, to the environmental impacts of a new way of working. Rebecca Parsons, CTO emerita at the technology consultancy Thoughtworks, collects all of these concerns under “building an equitable tech future,” where, as new technology is deployed, its benefits are equally shared. “As technology becomes more important in significant aspects of people’s lives,” she says, “we want to think of a future where the tech works right for everyone.”

Technology use often goes wrong, Parsons notes, “because we’re too focused on either our own ideas of what good looks like or on one particular audience as opposed to a broader audience.” That may look like an app developer building only for an imagined customer who shares his geography, education, and affluence, or a product team that doesn’t consider what damage a malicious actor could wreak in their ecosystem. “We think people are going to use my product the way I intend them to use my product, to solve the problem I intend for them to solve in the way I intend for them to solve it,” says Parsons. “But that’s not what happens when things get out in the real world.”

AI, of course, poses some distinct social and ethical challenges. Some of the technology’s unique challenges are inherent in the way that AI works: its statistical rather than deterministic nature, its identification and perpetuation of patterns from past data (thus reinforcing existing biases), and its lack of awareness about what it doesn’t know (resulting in hallucinations). And some of its challenges stem from what AI’s creators and users themselves don’t know: the unexamined bodies of data underlying AI models, the limited explainability of AI outputs, and the technology’s ability to deceive users into treating it as a reasoning human intelligence.

Parsons believes, however, that AI has not changed responsible tech so much as it has brought some of its problems into a new focus. Concepts of intellectual property, for example, date back hundreds of years, but the rise of large language models (LLMs) has posed new questions about what constitutes fair use when a machine can be trained to emulate a writer’s voice or an artist’s style. “It’s not responsible tech if you’re violating somebody’s intellectual property, but thinking about that was a whole lot more straightforward before we had LLMs,” she says.

The principles developed over many decades of responsible technology work remain relevant during this transition. Transparency, privacy and security, thoughtful regulation, attention to societal and environmental impacts, and enabling wider participation through diversity and accessibility initiatives are still the keys to making technology work toward human good.

MIT Technology Review Insights’ 2023 report with Thoughtworks, “The state of responsible technology,” found that executives are taking these considerations seriously. Seventy-three percent of business leaders surveyed, for example, agreed that responsible technology use will come to be as important as business and financial considerations when making technology decisions. 

This AI moment, however, may represent a unique opportunity to overcome barriers that have previously stalled responsible technology work. Lack of senior management awareness (cited by 52% of those surveyed as a top barrier to adopting responsible practices) is certainly less of a concern today: savvy executives are quickly becoming fluent in this new technology and are continually reminded of its potential consequences, failures, and societal harms.

The other top barriers cited were organizational resistance to change (46%) and internal competing priorities (46%). Organizations that have realigned themselves behind a clear AI strategy, and that understand its industry-altering potential, may be able to overcome this inertia and indecision as well. At this singular moment of disruption, when AI provides both the tools and the motivation to redesign many of the ways in which we work and live, we can fold responsible technology principles into that transition—if we choose to.

For her part, Parsons is deeply optimistic about humans’ ability to harness AI for good, and to work around its limitations with common-sense guidelines and well-designed processes that include human guardrails. “As technologists, we just get so focused on the problem we’re trying to solve and how we’re trying to solve it,” she says. “And all responsible tech is really about is lifting your head up, and looking around, and seeing who else might be in the world with me.”

To read more about Thoughtworks’ analysis and recommendations on responsible technology, visit its Looking Glass 2024 report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Source: Technology Review
