Friday, September 13, 2019

Let me voice my concern here

I’ve been reading a lot lately about the technical “improvements” that are enabling deepfake videos. These videos are getting slicker and slicker, to the degree that it’s nearly impossible to detect that they’re fakes.

As if we don’t have enough disinformation floating around out there – and as if politicians don’t actually say enough unbelievable things already (forget the bad actor sitting at the Resolute Desk in the Oval Office, Sharpie in hand; has anyone caught Boris Johnson lately?) – now we have the prospect of all these seemingly credible fake videos out there further undermining our fragile polity. Oh, no, Mr. Bill.

And. Now. This.

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists. (Source: WaPo)

This happened at a British energy company, where last March a pretty senior employee – senior enough to have big-buck authorization – believing that he was on the horn with his even more senior boss, went ahead and wired – as requested – $240K to an account in Hungary. (Hungary? Is Hungary the new Russia? The new Nigeria?)

The employee admitted that he had found the request “rather strange,” but “the voice was so lifelike that he felt he had no choice but to comply.”

Our voices are pretty much like fingerprints.

Oh, there can be near duplicates.

A couple of weeks ago, I went to the Zac Brown Band concert at Fenway. (Excellent, by the way, except for the head-banging metal encores. I didn’t like loud music like this when I was young, let alone as I’ve gotten older – even with my hearing slightly diminished. Blessedly, the bulk of the concert was ZBB’s great combo of country, ballad, Jimmy Buffett…They even covered James Taylor’s Sweet Baby James.)

We didn’t know ahead of time who the opening act was, and as I walked in with my sister and niece, we heard a very familiar voice. A voice that sounded just like Willie Nelson. No, it wasn’t Willie. It was his son Lukas, who sounds exactly like his old man. Only he’s really good looking. (Sorry Willie.) And quite good. I will be buying his band’s CD.

Then there’s me and my sisters. We don’t look alike, but our voices (and coughs) are remarkably similar. When we would call my mother, we would sometimes string her along, not giving her an immediate hint about which one of us she was talking to.

And we’ve all heard impersonators who sound uncannily like the person they’re imitating.

But we all recognize voices – sometimes even more quickly than we do faces.

Years ago, when Boston Celtic Reggie Lewis died suddenly, my brother Tom, out on the West Coast with the news on, looked up when he recognized the voice of the doctor who had declared Lewis dead as she was interviewed on national TV. That doctor was the daughter of my mother’s closest friend, and we’d all known her forever. She’d even been Tom’s grammar school classmate. He’d known right away that the voice was that of Mickey McGinn.

Anyway, voice faking software – “ultra-realistic voice cloning” – is coming out of biggies like Google and startups alike. This software:

…can copy the rhythms and intonations of a person’s voice and be used to produce convincing speech.

And these tools are out there for free.
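
Just how low the bar is becomes clear if you poke at any of the freely available toolkits. Here’s a minimal sketch using the open-source Coqui TTS package (one example of the genre, not whatever these particular fraudsters used); the model name, the voice sample, and the script are all purely illustrative:

```python
# Minimal voice-cloning sketch with the open-source Coqui TTS package
# (pip install TTS). The model, sample file, and text are illustrative
# placeholders only.
from TTS.api import TTS

# Load a multi-speaker model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short recording of the target speaker is all the "training" required;
# the output mimics that voice reading the supplied text.
tts.tts_to_file(
    text="Please wire the funds to the new supplier account today.",
    speaker_wav="boss_voicemail_sample.wav",  # a few seconds of the target's voice
    language="en",
    file_path="cloned_request.wav",
)
```

That’s the whole thing: no research lab, no special hardware, just a voice sample and a sentence you want it to say.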

As always with emerging technology, there are noble uses:

Developers of the technology have pointed to its positive uses, saying it can help humanize automated phone systems and help mute people speak again.

Yes, and it’s more likely to be used for nefarious purposes.

The Hungarian swindle is one of at least three instances that Symantec has found of executives’ voices being mimicked to swindle companies.

Lyrebird, an AI startup that’s unleashed one of these voice faking apps, noting that the technology is inevitable – which is so true – has this to say in its ethics statement:

“Imagine that we had decided not to release this technology at all. Others would develop it and who knows if their intentions would be as sincere as ours.”

The only way to stop a bad guy with voice faking technology is a good guy with voice faking technology? Or is it the other way around?

The technology is still not fully refined:

But in some cases, thieves have employed methods to explain the quirks away, saying the fake audio’s background noises, glitchy sounds or delayed responses are actually due to the speaker being in an elevator, in a car or in a rush to the next flight.

The scammers are also savvy about who to go after – those with the authority to wire money – and they create a sense of urgency (have to have it now!) that makes someone more likely to take care of the request ASAP. Hey, it’s Mr. Big, and he needs a quarter-of-a-mill to go to Hungary. Here you go.

After their first energy company scam was successful, the scammers called back again. This time, the employee called his boss. And while he was on the phone with his boss, the fake boss called back. Bad timing!

Google and other AI developers are “working to build systems that can detect and combat fake audio, but the voice-mimicking technology is evolving rapidly.” And, of course, they’re the same folks who are developing the voice faking technology to begin with.

“There’s a tension in the commercial space between wanting to make the best product and considering the bad applications that product could have,” said Charlotte Stanton, the director of the Silicon Valley office of the think tank Carnegie Endowment for International Peace.

Sort of like Sackler/Purdue Pharma unleashing OxyContin on an unsuspecting world, and then turning around and developing an antidote.

“Researchers need to be more cautious as they release technology as powerful as voice-synthesis technology, because clearly it’s at a point where it can be misused.”

Oy!

I hate to admit it, but sometimes I wouldn’t mind going back to the good old days. No, not the bad ones. But maybe if we could have a decade back, knowing what we know now…
