The World’s AI Companies Are Killing Trust in the Technology

“An AI tool is only as ethical as the developers who created it and the source material upon which it was trained. You, the human in the loop, are the harbinger of ethical AI usage.”

I was scrolling my LinkedIn feed recently and noticed a former associate had posted that they had achieved certification in “AI ethics” from one of the world’s largest technology companies. I’ve noticed this term becoming increasingly common lately, and it’s puzzling.

Ethical according to whom? Ethical compared to what? Whose ethical code are we using to determine whether a given technology is ethical? By what standard do we measure whether an AI-generated image, song, article, thought piece, or other asset is “ethical”?

There’s a simple answer: Ethical AI is a fad. It may look great in a press release, but the reality is that it lacks substance. It’s nothing more than a marketing term designed to make buyers feel safe investing in this technology. And why, you might ask, do companies — specifically financial institutions in our case — that seek to enable the use of AI feel unsafe or reticent when it comes to investing in AI solutions? There are a few reasons.

Trust Killers

Sure, AI technology is nascent and changes daily, leaving security teams with inadequate time to vet these tools’ vulnerabilities. But ultimately, regulated industries are hesitant to use AI solutions as they are seen as black boxes with limited transparency, making validation for regulatory compliance hard and cumbersome. When it comes to developing and implementing ethical Gen AI solutions, companies at the forefront of this new breed of AI technology have proven unable and unwilling to self-regulate effectively and provide the transparency regulated industries require.

Without transparency, there can be no accountability, and without accountability, assurances of the ethical use of AI are meaningless. That’s where sensible regulation can immediately make a difference.

Without government regulations (like the EU AI Act, for example) that aim to provide guidance, guardrails, and accountability to ensure consumers are not harmed by tech companies launching hastily trained, poorly tested AI models into production, we’re fed a steady stream of news headlines chronicling AI failures and missteps.

Headlines like these are trust killers.

Large corporations like Google, Microsoft, Meta, and OpenAI are driven by their eagerness and ambition to dominate the market in the rapidly expanding trillion-dollar AI industry. This zeal often leads them to take shortcuts in their development processes. As a result, when these giants fail to meet their lofty AI promises, it’s not just their reputation that suffers. Startups, small businesses, and emerging tech companies that invest in developing genuinely smart, safe, and ethically trained AI products bear the brunt of the fallout. These responsible entities must navigate the extensive reputational damage inflicted by the failures of larger firms, which often results in more frequent, costly, and embarrassing headlines.

Ethical AI is Our Job

I’ve worked in technology long enough to understand hype cycles, and we’re certainly in the midst of one with Gen AI. When every company is clamoring for attention in the new world of AI, terms like “ethical AI” and “responsible AI” play on fear rather than value. “You’re scared of AI,” these messages seem to call out, “but if you use our product, there’s no need to be afraid.” Tech marketers should leave the fearmongering to politicians and stick to the facts.

AI ethics as a service is bananas. If a company is promising “ethical AI,” run away. An AI tool is only as ethical as the developers who created it and the source material upon which it was trained. You, the human in the loop, are the harbinger of ethical AI usage.

So, when you look to partner with a firm offering something akin to “ethical” or “responsible” AI, just remember that’s your job. Rather than seeking certification from one of these companies that cannot be trusted to self-regulate, perhaps spend that time doing a deep dive on the term “irony” instead.



Warning & Disclaimer: The pages, articles and comments on IPWatchdog.com do not constitute legal advice, nor do they create any attorney-client relationship. The articles published express the personal opinion and views of the author as of the time of publication and should not be attributed to the author’s employer, clients or the sponsors of IPWatchdog.com.

Join the Discussion

4 comments so far.

  • Anon
    May 14, 2024 09:35 am

    Tim,

    Some may view the desire to have an “ethics bar” to be just a bit self-serving and aiming instead to control entry into the field.

    Given the ubiquitous nature (and ready and random ability and availability) for software engineering, you are NOT likely to succeed – even if you could surmount the natural skepticism of creating any type of peer-accountability.

  • Anon
    May 11, 2024 08:59 pm

    Pro Say – please update your understanding of US Copyright law.

  • Tim
    May 11, 2024 07:12 am

    I hate marketing whitewash too, but you’re painting with a wide brush. “AI Ethics” is an academic discipline, with research and papers and legal requirements and professors arguing, and I studied it under a full professor at a major university. It was a lot like “Engineering Ethics” which I took decades ago. I am sure there are bad versions of this class, but don’t discard the real discipline for those.

    In my view the problem is that there wasn’t much discussion of ethics in computer science until the past decade, and there still is no mandatory licensing for computer professionals so there is no public accountability. From a public policy standpoint the solution to unethical engineers and lawyers wasn’t for customers to learn materials science and how to file legal briefs, it was to require that all practitioners act ethically and be accountable to their peers.

  • Pro Say
    May 10, 2024 01:11 pm

    For any AI relying in any way on the unpaid-for creative work of others, “Ethical AI” is an oxymoron. “Certifications” don’t magically convert a wrong into a right.
