Introduction
In today’s rapidly evolving technological landscape, companies across industries are actively pursuing digital transformation. This drive has intensified with the recent wave of innovation in AI, with Large Language Models (LLMs) emerging as pivotal players in the quest for AI adoption. However, despite the great potential of these tools, it’s important to consider their risks and drawbacks. Will AI seamlessly integrate into existing processes, or will it fundamentally reshape the entire business ecosystem? Scientist Roy Amara made a significant observation, known as Amara’s Law, which states that “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Our hypothesis aligns with this adage: in the short term, LLMs and AI will enhance existing operations, but over the longer term, they will likely catalyze a restructuring of industries and business models.
With this in mind, we will explore the current benefits and limitations of present-day LLM technology demonstrated through some real-world applications, while offering a glimpse of its possible future capabilities and consequences.
Benefits of LLMs
LLMs have set off a paradigm shift in the workplace, moving workers from a “creator” approach to an “editorial” approach when handling routine tasks. For instance, in software engineering and other coding-based jobs, GitHub Copilot has become a game-changer. It presents users with suggested code snippets that a programmer can then edit, expediting the code development process. Moreover, LLMs have opened the doors to “no-code app building,” where individuals can simply describe their app ideas in natural language and the system generates a customizable template. In this way, LLMs are shifting the priority of code developers from mundane syntax and implementation to more strategic tasks like code and architecture design. As such, rather than stifling innovation, LLMs are helping liberate resources for humans to explore previously uncharted territories of creativity. Building on our example of software development: with the assistance of LLMs on coding tasks, developers can dedicate more time and brainpower to conceptualizing, designing, and fine-tuning ambitious “moonshot” projects, such as an AI-driven virtual reality simulation for medical training. Similarly, LLMs are creating more room for humans to explore visionary projects in other industries as well.
Beyond these tangible advancements, LLMs are displaying remarkable ideation capabilities. They are capable of producing original content in response to queries and can also generate replies that spark fresh ideas in those issuing the queries. Even ‘hallucinations’, typically considered a limitation (and discussed in the next section), are being leveraged for startup concepts and contributing to scientific research.
Leveraging LLMs for data analysis and Natural Language Processing provides businesses with the ability to identify patterns and extract valuable insights that inform strategic decisions and help companies gain a competitive edge. This capability is particularly potent in fields like marketing, where understanding customer preferences is paramount. LLMs enable marketers to delve deep into consumer behavior, crafting highly personalized messages that resonate with target audiences. The result is not just enhanced engagement but also a tangible boost in sales, leading to overall business growth.
Task automation in various business functions can drive efficiency and reduce costs. For instance, in sales, LLMs can be used to automate follow-ups and nurture leads until clients are prepared for direct interaction with a human agent.
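As a minimal illustration of this kind of automation, a follow-up message can be drafted by filling a prompt template with a lead’s details and handing the result to an LLM. The sketch below stops at prompt construction; the `Lead` fields, names, and template wording are hypothetical examples, not a reference to any particular CRM or API.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    """Hypothetical record of a sales lead awaiting follow-up."""
    name: str
    product: str
    last_contact_days: int

def build_followup_prompt(lead: Lead) -> str:
    """Assemble an LLM prompt asking for a short, personalized follow-up email."""
    return (
        f"Write a brief, friendly follow-up email to {lead.name}, "
        f"who asked about {lead.product} {lead.last_contact_days} days ago. "
        "Invite them to schedule a call with a sales representative."
    )

prompt = build_followup_prompt(Lead("Alex", "usage analytics", 7))
print(prompt)
```

In a production system, the generated draft would typically be reviewed before sending, in keeping with the human-in-the-loop approach discussed later.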
LLMs can also serve as valuable repositories of organizational knowledge that can empower teams, for example, by expediting access to relevant information or facilitating the onboarding and training of new personnel. This is possible because, apart from domain-specific knowledge, these models can be fine-tuned to have organization-specific or even function-specific expertise which goes a long way in promoting operational efficiency.
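In practice, surfacing organizational knowledge is often done with retrieval: internal documents are scored against an employee’s question and the best match is supplied to the model as context. The sketch below uses simple word-overlap scoring as a stand-in for the embedding search a production system would use, and the policy snippets are invented examples.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the question
    (a toy stand-in for embedding-based similarity search)."""
    return max(docs, key=lambda d: len(tokens(query) & tokens(d)))

# Invented internal policy snippets standing in for a company knowledge base.
docs = [
    "Expense reports are submitted through the finance portal by the 5th of each month.",
    "New laptops are requested via an IT ticket and approved by your manager.",
    "Paid time off accrues at 1.5 days per month and is tracked in the HR system.",
]

best = retrieve("Where do I submit my expense reports?", docs)
print(best)
```

The retrieved passage would then be placed in the model’s prompt so its answer is grounded in the organization’s own material rather than generic training data.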
Challenges and Limitations of LLMs
The successful deployment of LLMs and realization of these benefits requires the acknowledgment and mitigation of the risks involved. Let’s now turn our attention to these associated challenges that demand careful consideration.
LLMs are trained on large amounts of data from diverse sources which can inadvertently harbor gender, racial or ideological biases. Since LLMs learn from this training data, these latent biases can be replicated in their responses. The perpetuation of these biases can contribute to unequal representation and unfair treatment, further entrenching societal prejudices. There is also the risk of reputational harm when such content is incorporated into business reports or promotional material, adversely impacting a company’s image. To avert these risks, many experts advocate for a ‘human-in-the-loop’ approach which ensures that the information disseminated aligns with ethical standards and accuracy.
The controversy surrounding Microsoft’s chatbot Tay provides a stark illustration of how LLMs can unwittingly adopt and propagate biases present in the data they’re trained on. Tay was released on Twitter in 2016 with the intent to learn from human interactions. The experiment quickly went awry as Tay absorbed and regurgitated racist, misogynistic and offensive content. An apology by Microsoft followed and Tay was taken down within 16 hours of its release.
The sheer size of the training data comes with another challenge. Concerns regarding privacy infringement in the collection, storage, and retention of sensitive data by LLMs loom large. There is potential for these models to unknowingly leak proprietary information, Personally Identifiable Information (PII), and the interaction history of individuals and organizations whose data contributed to the training set, often without their explicit consent or awareness.
Intellectual property concerns have yet another dimension. When these models generate new product designs or ideas based on user prompts, the question of ownership becomes intricate. Determining who can rightfully lay claim to these creations poses an intriguing and evolving challenge in the legal and ethical landscape of AI innovation.
Going beyond flaws in the training data, LLMs can also hallucinate. A “hallucination” is a high-confidence response that deviates from factual accuracy and lacks grounding in the model’s training data. In other words, these are responses generated by LLMs that sound convincing but are simply incorrect and have no basis in reality. These deviations can range from minor inconsistencies to entirely fabricated information. They occur in part due to the tradeoff between accuracy and novelty in the underlying response-generation methods used by the model.
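The accuracy–novelty tradeoff can be seen in temperature sampling, a common control in LLM text generation: dividing the next-token scores by a temperature before the softmax flattens the distribution at high temperature, giving low-probability (and potentially ungrounded) continuations a real chance of being sampled. The sketch below uses invented toy scores to show the effect.

```python
import math

def softmax_with_temperature(scores: list[float], temperature: float) -> list[float]:
    """Convert raw next-token scores into probabilities;
    higher temperature flattens the distribution."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores: one well-supported token and two unlikely ones.
scores = [5.0, 2.0, 1.0]
low = softmax_with_temperature(scores, 0.5)   # conservative sampling
high = softmax_with_temperature(scores, 2.0)  # more adventurous sampling

print(f"T=0.5: top token probability {low[0]:.2f}")
print(f"T=2.0: top token probability {high[0]:.2f}")
```

At low temperature the model almost always picks its best-supported token; at high temperature the alternatives gain substantial probability, which makes outputs more novel but also more prone to drifting from the facts.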
Hallucinations have the potential to propagate misinformation which can have profound consequences, especially when users internalize these responses without doing their due diligence and develop a distorted understanding of topics. This was exemplified in a recent lawsuit involving Avianca Airlines, where the legal team representing the plaintiff used ChatGPT to aid their legal research without verification of the generated content. The legal brief they submitted contained references to over half a dozen court decisions and quotes that simply did not exist. In response, the judge, P. Kevin Castel, ordered a hearing to discuss potential sanctions due to the submission of misleading legal information. This underscores the importance of proper fact-checking when consuming AI-generated content.
Understanding these limitations also opens doors to harnessing hallucinations for creative purposes, as previously discussed. Some scientists have deliberately induced AI hallucinations to create novel protein sequences with an unlimited array of properties to advance their research. Such innovative applications can lead to new breakthroughs in a variety of industries.
Conclusion
The future of LLMs in the business world is promising. As they continue to evolve, their impact will become increasingly significant. However, as with any revolutionary technology, LLMs come with their own set of challenges and limitations. It is essential for businesses to stay informed and proactive to responsibly harness the power of LLMs and navigate the evolving landscape successfully.
Authored by Rhea Sethi