
AI in Microsoft Office – “I want a new drug!”

August 30, 2023, 07:00 AM

On July 16th Microsoft announced that AI Copilots would be available to Office 365 subscribers for the low-low price of just $30 more per month. That’s $30/mo. on top of a full Office 365 license of $36/mo. Microsoft Office is one of the greatest productivity tools of all time. Could AI Copilots really be worth more than 80% of the price of the full Office suite?

MTV made me a huge Huey Lewis fan; in my humble opinion, Huey and The News were in rare company when it came to acting out their music on video. When I read one of the first summaries of the Microsoft Office AI product announcement, I heard that great wind-up guitar riff from one of their greatest hits and videos: “I Want a New Drug!”

Companies like KPMG, Lumen, and Emirates NBD have all had early access to Copilot. “We’re learning that the more customers use Copilot, the more their enthusiasm for Copilot grows,” says Microsoft’s Head of Consumer Product Marketing Yusuf Mehdi. “Soon, no one will want to work without it.”

Sprinkle a little “addictive AI” on top of one of the greatest productivity tools of all time and there you have it – “a new drug.” After I fired up YouTube and watched Huey run through the video storyline four or five times in that great 1980s hair and red suit, I stopped to contemplate this new revelation about AI. Since the release of ChatGPT and Microsoft’s announcement of its expanded partnership with OpenAI in January 2023, AI has dominated the news – technical, business, and mass media. I wanted to see if any of them had anticipated this drug-like quality of AI.

The mass media, of course, led with fear, because nothing generates views and clicks like experts forecasting the end of mankind. The mass media got a huge boost from Hollywood when writers and actors pivoted from the economic challenges of studio disintermediation by streaming to the threat of being replaced by AI agents generating scripts and likenesses of favorite characters. When I thought about it, the mass media seem to believe that AI will be better at human endeavors than humans are, and that producers could become addicted to the speed and low cost at which AI can operate.

The business news is also intrigued by the possibility of AI replacing workers and thus reducing labor costs, so it has been bouncing back and forth from “you had better get on the AI bandwagon” to “be careful, AI is still risky” to “here come the new AI regulations!” From my perspective, the business news has been caught up in the mass media fear-mongering and missed an important distinction between Generative AI like ChatGPT and Operational AI like the recommendation agents that have been driving e-commerce and streaming selection algorithms for years. Operational AI uses prediction machines to improve workflow outcomes and efficiency simultaneously, thereby improving business productivity with the next generation of automation, and it doesn’t carry the ethical and social baggage of Generative AI. As a form of automation, AI presents the same familiar challenges business leaders have faced in balancing stakeholder interests since Henry Ford revolutionized the auto industry with it a century ago. So AI definitely has promise, but there are no indications of addiction yet, other than that lower costs are hard to ignore.
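
To make the Operational AI idea concrete, here is a minimal sketch, in Python with scikit-learn, of a “prediction machine” embedded in a routine workflow. The lease-application features, the training data, and the approval threshold are all hypothetical and purely illustrative; they do not describe any particular product.

```python
# Minimal sketch of "Operational AI": a prediction machine embedded in a workflow.
# Hypothetical example -- features, data, and threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical applications: [credit_score, years_in_business, payment_to_revenue_ratio]
X_history = np.array([
    [720, 12, 0.05],
    [640,  2, 0.22],
    [690,  7, 0.10],
    [580,  1, 0.30],
    [750, 20, 0.04],
    [610,  3, 0.25],
])
y_history = np.array([0, 1, 0, 1, 0, 1])  # 1 = became delinquent, 0 = paid as agreed

model = LogisticRegression().fit(X_history, y_history)

def route_application(features):
    """Score a new application and route it within the existing workflow."""
    risk = model.predict_proba([features])[0][1]
    if risk < 0.2:
        return f"auto-approve (predicted delinquency risk {risk:.0%})"
    return f"send to credit analyst (predicted delinquency risk {risk:.0%})"

print(route_application([700, 8, 0.08]))
```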

That leaves the technology news sources, which have generally stayed true to their assignment and focused on figuring out where AI can work, where it is risky, and how exactly these new tools should work. Like their mass media and business counterparts, however, they have focused on Generative AI, primarily ChatGPT. I know multiple software engineers who have turned to coding forums to find ways to use ChatGPT to help them write code. Writing code is often such a constrained task that the ChatGPT algorithms are able to quickly provide working solutions. None of these people think ChatGPT is going to become sentient and unleash the Terminator, but coders are finding practical applications of the generative technology. Others, however, have also identified challenges.
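
As an illustration of how those engineers might wire ChatGPT into their own tooling, here is a minimal sketch using the openai Python package as it existed in 2023. The prompt, the model choice, and the coding task are assumptions for the example, not a recommendation of any particular setup.

```python
# Minimal sketch of using ChatGPT as a coding assistant via the OpenAI API
# (openai Python package, pre-1.0 interface circa 2023; prompt and model are illustrative).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a senior Python developer."},
        {"role": "user", "content": "Write a function that parses ISO-8601 dates "
                                    "from a CSV column and returns the latest one."},
    ],
    temperature=0,  # keep the generated code as deterministic as possible
)

# Print the generated code suggestion for the engineer to review before use.
print(response["choices"][0]["message"]["content"])
```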

The “SoDak Governors Blog” asked ChatGPT the relatively simple question, “Write a blog post about the youngest South Dakota governor.” The result showed that the algorithms appear to have been written with a priority on answering the question rather than finding the correct answer: ChatGPT invented, from whole digital cloth, a fictitious governor, complete with a capitol-ready portrait of a person who never existed. Maybe the writers and actors should be worried. When answering the question is a higher priority for the LLM than finding the truth, fiction can be the result.

The recent Stanford study of ChatGPT’s learning capabilities focused on the question “How do these LLMs learn?” The answer is, “We don’t know, but not very well.” The researchers repeatedly asked the AI agent whether the number 17077 is prime. In March 2023 the tool was correct about 98% of the time, but when the question was rerun in June 2023 the tool answered correctly only about 2% of the time. Neither the researchers nor the programmers have yet explained why, but “unlearning” is definitely a concern.
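
For context, whether 17077 is prime is trivially verifiable with a few lines of deterministic code, which is exactly what makes the model’s decline on such questions so easy to measure. A generic trial-division check (a sketch, not the researchers’ code) looks like this:

```python
# Simple deterministic primality check by trial division
# (a generic sketch, not the code used in the study).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:      # only need to test divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(17077))  # True -- 17077 is prime
```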

Perhaps more useful to those of us putting AI to use are the insights into an “AI-driven medical documentation service” that uses natural language processing (NLP) to “listen” to patient-physician conversations and then handle routine documentation for the medical record. The big takeaway is that the company reports its AI listening agents are accurate about 80% of the time, and that it employs 200 contractors who also listen to the conversation recordings and “double check” the AI. So much for AI putting everyone out of work. And because the listening still takes as long as the conversation itself, it is hard to imagine the AI is improving the speed of the process much at all.
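
The workflow described, where AI output is trusted only after a human has a chance to double check it, is the classic human-in-the-loop pattern. Here is a minimal sketch of that routing logic; the confidence threshold, the record structure, and the reviewer queue are hypothetical assumptions, not details of the actual service.

```python
# Minimal sketch of the human-in-the-loop pattern described above:
# AI-drafted documentation is filed only when the model is confident,
# otherwise it is queued for a human contractor to review.
from dataclasses import dataclass

@dataclass
class DraftNote:
    visit_id: str
    text: str
    confidence: float  # model's self-reported confidence, 0.0 - 1.0 (hypothetical)

REVIEW_THRESHOLD = 0.80  # illustrative cutoff, not the vendor's actual figure

def route_note(note: DraftNote) -> str:
    """Decide whether a draft note can be filed or needs human review."""
    if note.confidence >= REVIEW_THRESHOLD:
        return "file directly to medical record"
    return "queue for human reviewer"

notes = [
    DraftNote("visit-001", "Patient reports mild knee pain...", 0.93),
    DraftNote("visit-002", "Discussed medication change...", 0.61),
]
for n in notes:
    print(n.visit_id, "->", route_note(n))
```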

When it’s all said and done, I do think AI is like a new drug. Some are going to use it frequently, perhaps even recreationally, to help them perform better as authors of ideas, articles, and PowerPoint presentations. These users will gladly pay Microsoft an extra $30/mo. to feel more creative and productive. Some of them may get carried away and cross the line on copyright or likeness infringement, and regulators are certainly going to try to deal with these “drug abusers.”

But most of us are going to see this new drug as medicine. Medicine that will improve both personal and enterprise productivity. Operational AI is already improving how businesses operate and consumers behave. If the mission of the enterprise contributes to the betterment of customers and society, AI can be a medicine that, like penicillin, will improve the well-being of all of us.

Scott Nelson
President and Chief Digital Officer | Tamarack Technology
Scott Nelson is the President & Chief Digital Officer of Tamarack Technology. He is an expert in technology strategy and development including AI and automation as well as an industry expert in equipment finance. Nelson leads the company’s efforts to expand its impact on the industry through innovation using new technologies and digital transformation strategies. In his dual role at Tamarack, Nelson is responsible for the company’s vision and strategic planning as well as business operations across professional services and Tamarack’s suite of AI products. He has more than 30 years of strategic technology development, deployment, and design thinking experience working with both entrepreneurs and Fortune 500 companies.