Happy Friday! For this installment in our weekly feature, The Future in Five Questions, we examine the role of ethics in a profitable tech enterprise with Kathy Baxter — the principal architect of Salesforce’s ethical AI practice. Salesforce’s software is ubiquitous in American business, with its customer relationship management (CRM) services embedded in over 150,000 companies — and it’s Baxter’s job to try to keep that technology fair.

Read on to hear her thoughts about the hazards of emotion recognition, responsive AI regulation and what makes for good business.

Responses have been edited for length and clarity.

What’s one underrated big idea?

Responsible AI. We need a lot more companies recognizing that responsible AI isn’t just a regulatory need — it’s good business. Earlier this year, DataRobot released a survey of more than 350 U.S.- and U.K.-based business executives and development leads who use or plan to use AI, and the results were pretty shocking: 36 percent of respondents had suffered losses due to AI bias. Those respondents lost revenue, customers and employees, and incurred legal fees.

Even if the kind of AI you’re building doesn’t fall under regulations, it’s just good business to know whether the data that you’re using to train and build your models is representative of everyone it impacts. Is there systemic bias in it? Are you making predictions or decisions that are harmful to any group? This is something every company should be investing in.
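To make that concrete, here is a minimal sketch in Python of the kind of check Baxter describes: comparing a model’s rate of favorable predictions across demographic groups. The toy loan-approval data, the group labels and the 80 percent review threshold below are all illustrative assumptions, not Salesforce’s actual tooling.

    # Illustrative bias check: compare favorable-prediction rates across groups.
    # All data, group labels and thresholds here are hypothetical examples.
    from collections import defaultdict

    def positive_rate_by_group(predictions, groups):
        """Share of favorable (1) predictions for each demographic group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    # Toy loan-approval predictions for two hypothetical groups, "A" and "B".
    preds = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
    grps = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    rates = positive_rate_by_group(preds, grps)        # {'A': 0.8, 'B': 0.4}
    ratio = min(rates.values()) / max(rates.values())  # 0.5
    # A common rule of thumb flags ratios below 0.8 for human review.
    print(f"disparate impact ratio: {ratio:.2f}")

A check like this is only a starting point: it says nothing about why the rates differ or whether the training labels are themselves biased, which is why Baxter frames responsible AI as an ongoing investment rather than a one-time test.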

What’s a technology you think is overhyped? 

Emotion recognition or emotion detection. Research over the last decade has demonstrated that automated emotion recognition or detection is biased and inaccurate; it tends to be based on pseudoscience. So when it’s used for consequential decision-making, like employee monitoring, surveillance or trying to determine if someone is lying, real harm can occur.

We saw over the summer that Microsoft announced it was removing emotion recognition features from its facial recognition technology. That’s huge. Earlier, in 2021, Google also said that it was blocking any new AI features that analyze emotions, because of fears of cultural insensitivity. Then IBM deprecated its Tone Analyzer service. We’re seeing major tech companies that have been working on this technology for years say, “This is potentially harmful. It’s not equally accurate for everyone,” and stepping away from it.

What book most shaped your conception of the future?

I read Ursula K. Le Guin’s “The Left Hand of Darkness” in college. Having grown up in the rural South, I found it mind-blowing.

The concept is that the main character visits a planet where the people change gender throughout their lifetimes. Imagine if we were all able to truly experience what it’s like to be a different gender, a different race, or from a completely different culture. It would result in laws that are much more empathetic. We would have a better-functioning society. Resources would be distributed more equitably. So how do we bring this into technology? How do we create technology that is empathetic and equally accurate for everyone?

What could government be doing regarding tech that it isn’t?

Nearly every government in the world is trying to craft regulations or best practices to help mitigate the harms of AI. But AI and machine learning are not one thing; they are many different kinds of technologies, applied in many different kinds of contexts. It is an incredibly broad field.

And we’re still debating the definitions of some of its foundational concepts. What do we mean when we say AI? What do we mean when we say something is biased? How do we know if something is actually explainable or interpretable? Trying to come up with standards for when something is safe enough for widespread use — it’s just incredibly difficult to do.

So we need more collaboration between government, civil society and industry, so that everybody is up to date and knows what research can be applied to making regulations for an industry.

For the next year, I’m a visiting AI fellow at NIST, helping develop a playbook for its AI Risk Management Framework so that practitioners like me know how to use it.

What has surprised you most this year?

I had expected more AI regulation this year. Progress is being made, with the AI Bill of Rights, the New York City AI bias law and the proposed EU AI Act, but I would always like for it to be a little bit faster.

Again, it’s incredibly difficult to do, but I just can’t stress enough the importance of governmental collaboration and information sharing. We have to stop arguing over definitions. Maybe a definition isn’t perfect, but if it’s the one we agree on, let’s move forward with it.

At Digital Future Daily, we don’t have a precise term of art to describe the disparate, future-shaping technologies that fall under our editorial umbrella — AI, blockchain, quantum computing, VR and the metaverse, among others.

The European Union, however, is characteristically ahead of the curve on this. Today its innovation ministers approved an agenda for its strategy on “deep tech,” which aims to avoid re-creating the frequently hostile dynamic between the EU and the American tech giants of the current innovation era. POLITICO’s Pieter Haeck reported for Pro subscribers in July, when the plan was proposed, on the thinking behind the effort, which is largely aimed at drawing the kind of startup investment that has made Silicon Valley so flush with cash.

Whether that will work remains to be seen. But I’m interested in asking: How do you, the reader, mentally describe or categorize the tech we cover here at DFD? With all respect to our friends in Brussels, “deep tech” doesn’t really do it for me. Email me at [email protected] with your suggestions. — Derek Robertson

Last week, I wrote in POLITICO Magazine about the conservative politics of Elon Musk’s Twitter takeover and how they fit into the GOP’s Trump-era philosophical evolution.

Surprise, surprise: As Musk has used the platform to restore his definition of “balance” to the “digital town square,” mostly by bringing back a series of previously banned right-wing accounts and loosening content restrictions, elected Republicans have come out of the woodwork to cheer him. POLITICO’s Rebecca Kern reported on Musk’s praise chorus in Congress this morning, with a few choice quotes:

  • “Every time we can add someone of Mr. Musk’s intellect to the Republican Party, I’d do an old-man backflip,” Sen. John Kennedy (R-La.) said.
  • “The government’s going to go after someone who wants to have free speech?” House Minority Leader Kevin McCarthy said on Tuesday after President Joe Biden called for a review of foreign investors’ role in Musk’s purchase. “I think they should stop picking on Elon Musk.”
  • And one tech industry executive who requested anonymity noted the obvious, saying, “It looks like Republicans might go a little easier on Twitter because they like decisions that Musk is making.”

Musk has repeatedly protested that he’s not necessarily right-wing, but simply wants to curb what he sees as excessive liberal governance at the company. Whatever his internal politics, the inscrutable billionaire is getting a quick lesson in Washington politics: The enemy of your enemy is your friend. — Derek Robertson

Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.