For years, AI has promised to change the world. 2023 may prove to be the year in which it did. Hyperbolic? Perhaps, but artificial intelligence is passing all kinds of milestones, whether in terms of technical capabilities, mainstream awareness, or even—in an otherwise depressed tech industry—funding. Spurred by continued advances in fields like deep learning, combined with the advent of new tools like large language models and associated generative AI, artificial intelligence is truly enjoying its day in the sun.
Inventing tomorrow’s pharmaceuticals? Outperforming law school graduates on the bar exam? Creating prize-winning artwork? Check, check, and check. “We’re lucky to witness it, to be alive at a time like this,” says Irene Bratsis, author of The AI Product Manager’s Handbook.
Kevin Gattanella, author of Artificial Intelligence: The Future is Here and Now, attributes the “recent surge” of AI development to the convergence of two factors: a substantial increase in hardware processing power and the widespread availability of large datasets for “training” AI models. “These factors have revolutionized the field of AI, enabling researchers to explore complex algorithms and models with unprecedented efficiency,” says Gattanella.
Not everything is rosy. The AI revolution has reignited age-old fears about machines replacing human labor, but now in a manner that threatens jobs once considered safe from automation. Then there are more nuanced, but no less critical, concerns, such as the lack of inspectability of machine learning models, which (at least until now) has made it challenging to “lift the lid” and look inside.
Wherever you sit on the skepticism/optimism gradient, there’s no doubt that AI—and its creators—are enjoying their highest profile position in years. Who are the primary movers and shakers in this field? Read on for the top 10 companies to keep an eye on.
OpenAI: Building Superintelligent AI
For all the mainstream attention that artificial intelligence has received over the years, very few dedicated AI companies have entered the mainstream consciousness. OpenAI represents a vanishingly rare exception to this rule.
Established in 2015 by Elon Musk, Sam Altman, Ilya Sutskever, and others, OpenAI began as a non-profit research lab dedicated to the mission of building “safe and beneficial” artificial general intelligence: a strong or “general” intelligence able to accomplish any intellectual task performed by humans. Backed by huge names like Microsoft, and carrying a current valuation in the vicinity of $28 billion, OpenAI’s biggest innovations to date include the image-generating DALL-E deep learning model and, of course, the text-generating large language model ChatGPT.
Of these, ChatGPT has made the biggest waves, reaching one million user sign-ups in its first five days alone and quickly being baked into Microsoft’s Bing search engine as a possible Google killer (or, at least, maimer).
Recently, OpenAI debuted GPT-4, its latest large language model—and its most impressive one so far. “Besides its ability to engage in sophisticated reasoning, GPT-4 is also multimodal,” says Tom Taulli, author of Generative AI: How ChatGPT and Other AI Tools Will Revolutionize Business. “For example, it can process an image for text and then do an analysis on it. This capability is hugely important.”
As to what’s next, the promise of true artificial general intelligence apparently beckons. OpenAI has been frank about its predictions that AI will exceed human intelligence sometime this decade. The company has put together a team, including co-founder and chief scientist Ilya Sutskever and others, for “steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”
Founding figure Elon Musk seemingly isn’t too happy, though. After stepping down from the OpenAI board several years ago, Musk has now launched his own apparent rival company, X.AI, of which he is the director.
Cohere: Opening Generative AI For Business
Founded in Toronto in 2019, machine learning company Cohere has soared to its present valuation of $2.1 billion on its ambitions of building generative machine learning tools for the enterprise market. Sound a little, well, dull? Technologically, it’s anything but. Cohere is creating generative AI tools similar to those of OpenAI, only with the express mission that these will serve business use-cases rather than primarily user-facing ones.
Although OpenAI is now starting to pursue enterprise customers, Cohere has the first-mover advantage in this all-important (and potentially very lucrative) domain. Since its tools will most likely find themselves baked into all manner of programs in the coming months and years, there’s a good chance you’ll wind up using Cohere’s technology—even if you don’t explicitly realize it.
Select Cohere use-cases include AI models designed to power chatbots, such as the popular customer support chatbot Ada; models for penning AI-written blog posts and articles; and AI for online content moderation. In a depressed economy, in which competitive companies are trying to do more than ever with less, Cohere’s promises about the extreme productivity gains of generative AI could well make it a “must-have.”
It’s certainly made it popular with big-name supporters, including superstars like “father of AI” Geoff Hinton and Stanford’s Fei-Fei Li, along with Nvidia, Oracle, and Salesforce Ventures. It even recently brokered a deal with management consulting giant McKinsey.
As for names working at Cohere itself, there are assorted luminaries from Google, YouTube, and other tech giants. One co-founder, Aidan Gomez, was among the authors of the breakthrough 2017 research paper “Attention Is All You Need,” which described the revolutionary transformer deep learning architecture. Transformers (the “T” in ChatGPT) brought a major decrease in training time and costs for building gigantic machine learning models, and helped usher in the present era of large language models (LLMs) capable of generating original text.
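The core idea of that paper, scaled dot-product attention, can be sketched in a few lines. The following is a minimal, hedged illustration with toy NumPy data (not production transformer code): each token’s query is compared against every key, the similarity scores are normalized with a softmax, and the resulting weights mix the value vectors into a context-aware output for each token.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core operation from
    "Attention Is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query/key similarity
    # Numerically stable softmax: each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights      # weighted mix of values, plus the weights

# Three toy "token" vectors of dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-mixed vector per token
```

Because every token attends to every other token in one matrix multiplication, the computation parallelizes well on GPUs, which is a large part of the training-cost reduction described above.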
Just in case you were doubting Cohere’s proximity to the cutting edge of AI.
Character.ai: Fancy Chatting With an AI?
Fancy a chinwag with Elon Musk? What about a casual chat with former President Donald Trump, a one-on-one tutorial from Albert Einstein, or a collaborative reasoning session with Sherlock Holmes? Character.ai is a startup that uses generative AI technology to build chatbots for achieving exactly that.
The concept of chatbots, software that’s able to participate in human-like conversation, has been around for decades. But while at times impressive (a 1960s computational psychotherapist named ELIZA disconcerted even its creator with its effectiveness), more often than not these have been simplistic tools used for functional purposes like triaging customer support queries.
Character.ai represents the next generation of AI chatbots, not only able to “understand” complex queries and offer appropriate answers, but also able to draw upon vast troves of information. It’s even possible to fine-tune their parameters to give them distinct personalities.
The founders of Character.ai were previously employed at Google, where they worked on the company’s LaMDA (“Language Model for Dialogue Applications”) project, best known for causing the departure of a Google engineer who was convinced of the AI’s sentience. What they have essentially done is open this technology up to the world—allowing users to create their own bots and then share them with the community so that others can take them for a spin. As a result, the company’s website offers a smorgasbord of homespun chatbots—ranging from the aforementioned simulated famous individuals to tools for practicing job interviews to interactive gaming characters (think an AI “dungeon master” from Dungeons & Dragons). Investors have been just as keen to engage as users, helping drum up more than $150 million of funding thus far.
A more cohesive form for all of this will probably emerge over time. Even so, it’s impossible not to see the potential, even if it occasionally veers into Black Mirror territory.
Google DeepMind: Google’s AI Brains Trust
When the then-largely unknown DeepMind was snapped up by Google (now Alphabet) in early 2014 for a price estimated between $400 million and $650 million, it signaled to outsiders just how excited the tech world was—and remains—about deep learning.
An early DeepMind triumph was a demonstration of an AI agent capable of mastering old Atari games with minimal human input. This relied on advances in a field known as reinforcement learning, a kind of AI behaviorism that teaches agents to take actions by maximizing rewards. DeepMind later made waves with AlphaGo, a game-playing AI that defeated the 18-time world champion Lee Sedol at Go, one of the world’s most complex strategy board games.
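To make “taking actions by maximizing rewards” concrete, here is a minimal sketch of tabular Q-learning on a hypothetical five-cell corridor where only the last cell pays a reward. This is an illustrative toy, not DeepMind’s Atari agent (which combined Q-learning with deep neural networks), but the feedback loop is the same: no one tells the agent the right moves; it learns them from reward signals alone.

```python
import numpy as np

n_states, actions = 5, (-1, +1)        # cells 0..4; actions: move left / move right
Q = np.zeros((n_states, len(actions))) # estimated value of each (state, action) pair
gamma, alpha = 0.9, 0.5                # discount factor, learning rate

for _ in range(200):                   # repeated sweeps stand in for repeated play
    for s in range(n_states):
        for a_idx, a in enumerate(actions):
            s2 = min(max(s + a, 0), n_states - 1)   # walls clamp movement
            r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the far end
            # Bellman update: nudge Q toward reward + discounted best future value
            Q[s, a_idx] += alpha * (r + gamma * Q[s2].max() - Q[s, a_idx])

# The greedy policy reads off the best-valued action in each state
policy = [("left", "right")[int(np.argmax(q))] for q in Q]
print(policy)  # every state learns to head toward the reward
```

After enough sweeps, every state’s best action points toward the rewarding cell, purely because the updates propagate the reward backward through the value estimates.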
But DeepMind isn’t on this list just because of its impressive legacy. Its publicly available AlphaFold is an AI system able to predict a protein’s 3D structure based on its amino acid sequence, holding enormous potential for research in human health fields. Then there are present investigations into using AI to control the nuclear fusion plasma in a tokamak reactor with deep reinforcement learning, attempts to leverage AI to create more natural-sounding artificial speech, and more. At this July’s 40th International Conference on Machine Learning, DeepMind presented around 80 research papers covering everything from superior AI performance in long-term reasoning tasks to ways in which machine learning models can help better train “embodied agents” such as robots.
Currently, much of the excitement around the company is focused on a language AI model called Gemini that will build on DeepMind’s previous reinforcement learning research to supposedly solve some of the problems current large language models still struggle with, such as planning and problem-solving.
Recently, DeepMind merged with Google AI’s Google Brain division to unify and accelerate the search giant’s focus on artificial intelligence. From the look of things, those ambitions extend far beyond providing users with better search results.
Fiddler AI: Explaining the Unexplainable
AI models are a bit like representative democracy. With limited time and, sometimes, expertise, we place our faith in other entities to make decisions on our behalf. In the case of AI, that’s everything from language translation and financial fraud detection to disease diagnosis and the steering of self-driving cars. Where both politicians and AI potentially fall down is the issue of trust. If we no longer believe that decisions are being made fairly, consistently, and accurately, the benefit of having an external decision-maker becomes more liability than asset.
This is where explainable AI enters the picture. As miraculous as machine learning models can seem, they also remain inscrutable “black boxes.” While deep neural networks are recognizable approximations of the way the human brain works, we can’t (or haven’t previously been able to) unpack exactly how artificial neurons reach their final conclusions. Computer scientists can say whether a model works (for example, can it pick every picture of a dog out of a set of miscellaneous animal pictures?), but not how it works (meaning we don’t know exactly what it’s singling out when identifying a dog).
Understanding this secretive middle bit between inputs (data) and outputs (answers) is important. And it’s a problem that Fiddler AI is working hard to solve—developing tools that answer the all-important “how” and “why” questions often so opaque in AI decision-making. That includes features that let you see how particular data regions are affecting machine learning models and then make tweaks to either minimize or maximize their overall influence.
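As a hedged illustration of the underlying idea (not Fiddler’s actual product, whose methods are more sophisticated), permutation importance is one simple, model-agnostic way to measure how much each input feature influences a model’s predictions: scramble one feature at a time and see how much the output changes. The model below is a hypothetical stand-in that, by construction, leans heavily on feature 0, lightly on feature 1, and ignores feature 2.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))

def model(X):
    # A black box from the outside; internally it ignores feature 2 entirely
    return 5.0 * X[:, 0] + 0.5 * X[:, 1]

baseline = model(X)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to the output
    # How much do predictions shift when the feature is scrambled?
    importance.append(float(np.mean((model(Xp) - baseline) ** 2)))

print(importance)  # feature 0 dominates; feature 2 scores exactly zero
```

The scores recover what the model actually relies on without ever opening it up, which is the essence of the “how” and “why” questions explainability tools set out to answer.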
It’s an area that has crucial implications for everything from ethical concerns about fairness and bias to the more bottom line-oriented issue of quickly alerting engineers when their machine learning models suffer degraded performance. As AI becomes less of a novelty and more an expected part of our lives, the ability of companies to satisfy both regulators and customers by being able to understand—and explain—each prediction made by models will grow increasingly important. Will Fiddler manage to become the dominant player in this field? That much is still unclear. However, it’s certainly helping tackle a problem we’re only going to hear more about in the years to come.
Midjourney: Creating Tomorrow’s Artwork
With all the recent excitement about text-generating AI like ChatGPT, it’s easy to momentarily forget another advance in the world of generative artificial intelligence: machine-created artwork. Like the large language models that generate text, these transformer-based generative tools let users enter a written prompt (think “Elon Musk and Mark Zuckerberg do bare-chested battle in a cage, rendered in the pop art style of Roy Lichtenstein”) and allow the AI to take a crack at creating a finished product.
Various startups and tools have already popped up in the generative art space, with DALL-E and Stable Diffusion being two notable examples. However, AI research lab Midjourney is helping to lead the pack with its impressively detailed and realistic image-generation platform, initially entering open beta back in July 2022. Since then, Midjourney has continued to iterate and develop its offering, releasing frequent version updates to optimize its algorithmic creation engine.
Any discussion of artwork quality necessarily enters subjective territory. But the fact that an image generated using Midjourney recently won first place at the Colorado State Fair’s fine art competition, generating reams of publicity in the process, suggests that some creative Rubicon may have been crossed.
Unsurprisingly, AI-generated artwork has proven a thorny topic. From broader concerns that AI will replace human artists to more focused complaints like copyright infringement, due to the scraping of web images for use as AI training data, Midjourney and its compatriots aren’t—to put it mildly—always viewed benevolently by the artistic community. Whether this could disrupt its business model remains to be seen. On the flip side of the coin, high-quality generative AI makes it easier for anyone to create bespoke images and can serve as a creative aid by assisting with the brainstorming of ideas. The art of “prompt engineering” (writing a well-crafted description that coaxes the AI into producing unique, exciting images) is certainly an intriguing new development, one that calls for human creativity to clarify exactly the kind of image the AI should conjure up.
Nvidia: Chips to Power the AI Revolution
It is sometimes observed that, during a gold rush, the truly profitable ones are those that are selling the shovels. While that grossly undersells the complexity of what chipmaker Nvidia has achieved, it doesn’t ring entirely false, either. For years, Nvidia has been best known as a hardware enabler for the gaming industry due to its development of cutting-edge graphics chips.
But its leadership team, including founder and long-time CEO Jensen Huang, spotted the perfect opportunity for expansion by hitching its graphics processing unit (GPU) wagon to the AI boom. The kind of chips able to handle the complex, simultaneous calculations required for computer graphics also turn out to be well suited to the heavy-duty math needed in machine learning. This made them a “must-have” for data centers around the world, with innovations like Nvidia’s new, powerful (and, at around $40,000 per unit, expensive) H100 processor perfectly timed to surf the wave of generative AI. Nvidia’s strategic bet has left rivals such as Intel and AMD in the comparative dust. Today, some estimates suggest that Nvidia owns an astronomical 95% of the GPU market for machine learning.
Being in the right place, at the right time, with the right products has helped Nvidia this year soar past the vaunted $1 trillion market cap, putting it in extremely rarefied air among the world’s biggest and most powerful companies. Jensen Huang is now one of the world’s richest individuals thanks to his position in the company. In short, the current AI revolution runs on Nvidia hardware.
It’s not just shovel-selling that makes Nvidia an AI force to be reckoned with, however. It has been cementing its position atop the AI world with no shortage of smart software research as well. Recently, Nvidia made a $50 million investment in Recursion Pharma to train AI models for use in drug discovery.
Insilico Medicine: Creating Tomorrow’s Pharmaceuticals
Some companies in the AI space innovate by creating amazing technological infrastructure, such as next-generation algorithms. Others focus less on the research side of things, but instead on applying existing technology to critical real-world problems. Insilico Medicine does both. This Hong Kong-based biotech company has been working toward its mission of using AI for drug discovery since 2014.
Given how time-consuming and expensive classical drug discovery is, the idea that artificial intelligence could be used as part of the development process is one that researchers, clinicians and, yes, investors have been excited about for many years. Among other possible advances, AI can help analyze large volumes of data to identify possible drug candidates with higher levels of accuracy and speed.
The challenge is that, even if AI can help develop futuristic drugs, the high tech mantra of “move fast and break things” doesn’t fit wholly comfortably in the world of medicine, with its stringent safety and efficacy requirements. As a result, especially in a depressed economy, many startups in this sphere face a kind of “biotechnology winter” in which they risk running out of funding long before they can reach the point of creating anything.
Early mover Insilico Medicine appears not to have this problem. Having raised upward of $400 million, it’s seemingly cash-rich, and bringing in revenue through partnerships with various pharmaceutical companies, including China-based Fosun Pharma and French multinational pharmaceutical company Sanofi, for the use of its AI platforms. It’s employing a range of generative AI technologies to create novel molecular structures with desired properties, and is tackling a wide range of medical problems including cancer, fibrosis, autoimmune diseases, and more. Since 2021, Insilico has announced at least 12 preclinical drug candidates, meaning drugs with enough supporting evidence to be considered for human testing. Of these, three have so far advanced to human clinical trials and, as revealed in June, one such drug—billed as being the world’s first anti-fibrotic small molecule inhibitor designed using generative AI—has now graduated to Phase II clinical trials.
Hugging Face: The World’s AI Library
You know how Reddit is sometimes described as the “front page of the internet”? Think of Hugging Face as the proverbial front page of the open-source (free to use) machine learning community.
If Nvidia is powering the hardware end of many of the AI models the world is relying upon, then Hugging Face is providing the tools for building these machine learning applications. Founded in 2016 by three French entrepreneurs, Hugging Face was initially an attempt to create a chatbot app for teenagers. (The company’s unusual name comes from the “hugging face” emoji popular across social media.) However, one big pivot later, it has developed into a powerful GitHub-style library of open-source machine learning resources for anyone building AI. Which, amid the current boom, is seemingly everyone.
Like a combination toolbox and helpful friend, Hugging Face resources include all the demos, models, and datasets necessary to start carrying out tasks like getting an AI to recognize specific objects in an image or to generate text. Because many of these tasks are tough to implement (let alone optimize), the presence of established libraries makes learning, experimentation, and implementation significantly easier and faster.
As well as offering the necessary code to build, train, and deploy these open-source ML models, Hugging Face also represents a community of sorts for the likes of data scientists and machine learning engineers to gather, share ideas, and make contributions to a growing number of projects.
With upward of 200,000 daily users and thousands of organizations using its resources to better integrate AI capabilities into their workflows and products, Hugging Face is filling a critical role in the development of modern AI. At a time when more companies than ever are getting involved with AI, and a growing number of practitioners are working remotely, it addresses a serious need in the marketplace.
Shield AI: Autonomous Pilots for the Military
Shield claims to have built the “world’s best AI pilot.” Does it live up to that considerable hype? (And for those wondering how many people are trying to build AI pilots, the answer is “probably more than you think.”) The U.S. Air Force, U.S. Army, and Brazilian Armed Forces certainly seem supportive of Shield’s efforts. They’re all clients of the $2.3 billion company, which uses AI to power drones, co-pilot aircraft such as the F-16 fighter jet, and add autonomy to other aerospace and defense technologies.
Co-founded by a former Navy SEAL and an MIT alum, Shield certainly has the right pedigree when it comes to the areas it’s working in. While it’s hardly building everyday, consumer-facing tech, its creations certainly have the potential to impact our lives more than any other technology on this list—even if we might not be immediately aware of it.
Shield’s small unmanned aircraft system, Nova, was the first AI-enabled drone deployed for defense purposes in the history of the United States. Meanwhile, its Hivemind autonomy stack can be used to create swarms of autonomous, AI-piloted drones or bring AI smarts to F-16 fighters. Although the company would certainly never claim to replace highly skilled military personnel, the hope is that its autonomous AI systems could stop human pilots from being put in harm’s way unnecessarily. Shield’s AI systems can react to battlefield environments without the need for GPS, communications, prior knowledge of a location or scenario, or even necessarily a human support pilot physically present in the cockpit.
Earlier this year, Shield AI signed a “memorandum of understanding” agreement with Boeing to explore possible strategic collaboration in the “areas of autonomous capabilities and artificial intelligence on current and future defense programs.”
Expect to see plenty more innovation—and, based on defense spending, allotted budgets—in this area over the coming years.