Tuesday, February 26, 2019

Write on?

Perfect. Sounds good. Works for me!

If you don’t respond to some of your emails – the ones that seem appropriate for a short “smart reply” – Gmail waits a few days, and then lights things up with a big, bold hint that you just might want to reply. It even supplies a few handy suggestions, saving you the trouble of typing a top-of-head reply.

Thanks! I’ll look into it! I’m on it!

Pithy. Terse. To the point!

All you need to do is click on the response of your choice, and you’ve answered that email without having to do so much as put metaphorical pen to metaphorical paper. No thinking required. Just the doing of a mouse click.

Easy does it.

I haven’t “taken advantage” of it yet, and I find it pretty annoying – what writer wants words being put in her mouth? – but this is just the beginning. Forget about three-word replies. We’re not all that far from artificially intelligent long-form writing.

It seems that OpenAI (a non-profit – co-founded by the likes of Elon Musk, Peter Thiel, and Reid Hoffman – and doing research on artificial intelligence) has come up with a way to generate “convincing, well-written text.”

In keeping with its charter -

Discovering and enacting the path to safe [emphasis mine] artificial general intelligence.

- OpenAI is growing concerned about the potential for abuse.

OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result was the system generating text that “adapts to the style and content of the conditioning text,” allowing the user to “generate realistic and coherent continuations about a topic of their choosing.” The model is a vast improvement on the first version by producing longer text with greater coherence.

But with every good application of the system, such as bots capable of better dialog and better speech recognition, the non-profit found several more, like generating fake news, impersonating people, or automating abusive or spam comments on social media. (Source: Tech Crunch)
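To make “trained to predict the next word” a bit more concrete, here’s a deliberately tiny sketch of the core idea: build up counts of which word follows which, then generate text by repeatedly picking a likely next word and appending it. This is a toy bigram model over an invented corpus, nothing like GPT-2 itself – which is a large neural network trained on 40 gigabytes of text – but the generate-one-word-at-a-time loop is the same basic shape.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count, for each word in a tiny made-up
# corpus, which words follow it. (Illustrative only -- GPT-2 uses a
# neural network, not raw bigram counts.)
corpus = (
    "the model predicts the next word and the model appends the word "
    "to the text and the model predicts again"
).split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(seed, length=8):
    """Extend `seed` by repeatedly choosing the most common next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

With a corpus this small the output quickly falls into a loop; the point is only the mechanism – condition on what’s been written so far, predict the next word, repeat – scaled up enormously in the real thing.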

If you’ve used online support, you may have already encountered one of the good bots, as AI (natural language processing, machine learning, etc. – I have a client in this arena, and it’s pretty interesting) is deployed in many self-service customer service apps.

Anyway, OpenAI provides an example of a bad use of their technology, in which a bot could engage in Facebook and Twitter wars. The case they use to illustrate their point is AI making the argument that recycling is a “major contributor to global warming.”

Easy to see that bad bots could be out there making all sorts of reasonable-sounding arguments for all sorts of bad ideas. We’ve seen how conspiracy theories and Russian (human) trolls can cause all sorts of trouble (even when those Russian trolls often speak fractured English). Just wait until there’s no human intervention required.

So OpenAI has put the brakes on the full release, offering just a subset of their technology. For now.

We need more fake news and bad actors flaming around on social media like we need a bigger hole under the Polar icecap. Bad enough when you need a human or near human (think Alex Jones) to spew nonsense. When the crap can be generated automatically, well, caveat everyone everywhere about everything. Throw in the increasingly sophisticated photoshopping and video editing techniques available, and god knows what we’re going to see out there.

Scary stuff, for sure.

And I guess it’s just a matter of time before there’s software to write blog posts, short stories, novels…

O, brave new world that has such technology in it.
