
Seen and Unseen: The AI You Know, The AI You Don't

Writer: Matt Ferguson

While corporate America carries out its work of finding new ways to put artificial intelligence and large language models in places they shouldn’t be, the utility of discriminative AI goes largely unheralded in the public consciousness. Companies like Microsoft and Google are spending billions marketing generative AI. Generative AI deals with the creation of new data that resembles existing data (for instance, creating a painting that is derivative of an existing work). Discriminative AI, meanwhile, deals with the classification and recognition of existing data. Discriminative models rely on supervised learning, in which the datasets fed into the model are labeled and each data point is assigned to a known category. Discriminative AI models are used in applications such as facial recognition, spam email filtering, image recognition, and sentiment analysis.
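To make the spam-filtering example concrete, here is a minimal sketch of a discriminative model. The handful of example emails and labels are invented purely for illustration, and scikit-learn is just one of many libraries that could be used:

```python
# A minimal discriminative model: a toy spam filter trained with supervised
# learning on a handful of labeled examples (invented for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Claim your free prize now",        # spam
    "Meeting moved to 3pm tomorrow",    # not spam
    "You have won a gift card",         # spam
    "Here are the quarterly figures",   # not spam
]
labels = ["spam", "ham", "spam", "ham"]  # every data point carries a label

# Turn each email into a bag-of-words vector, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# The trained model sorts new, unseen text into one of the known categories.
print(model.predict(["Free gift card, claim your prize today"]))  # -> ['spam']
```

The point isn’t the particular library or algorithm; it’s that the model never creates anything new. It only learns to sort incoming data into categories it has already seen labeled.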


Generative AI, meanwhile, is employed in the creation of realistic images and videos, new musical compositions, and text generation (for example, writing an essay or email on behalf of a human user). Chatbots are based on a specific type of generative AI known as the large language model (LLM). Large language models are trained on massive text datasets and are particularly adept at translation, text generation, text summarization, and conversational question-answering. Google Gemini, for instance, is a family of LLMs that powers a conversational chatbot: when you give Gemini a prompt, it replies in a ‘conversational’ way to approximate an interaction with another person.
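Under the hood, a chatbot reply is essentially iterative next-word prediction. The sketch below uses the Hugging Face transformers library with the small, freely available GPT-2 model purely as an illustration; commercial chatbots like Gemini run far larger models behind an API, but the generate-text-from-a-prompt loop is the same idea:

```python
# A minimal sketch of generative text: give a model a prompt, get new text back.
# GPT-2 is used here only because it is small and freely downloadable;
# production chatbots are built on much larger models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Write a short note apologizing for missing a meeting:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Unlike the spam filter above, nothing here is being classified; the model invents text that statistically resembles the text it was trained on.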


As a result of generative AI’s novelty and its wide array of applications in the average user’s life, Big Tech has seized on inserting it into everything from web search to online chatbots. While we struggle to fully appreciate the longer-term consequences of hastily deploying generative AI models in every corner of our lives, the momentum of AI only grows. There doesn’t seem to be a consensus on whether AGI (artificial general intelligence) will ever materialize or, if it does, what risks it will pose to humanity; at present, we’re still in generative AI’s infancy and our regulatory approach is still very much in flux.


Currently, AGI is still a hypothetical; theoretically, an AGI model could reason, synthesize information, solve problems, and adapt to changing external conditions like a human can. The risks inherent in such a technically capable form of AI are hard to overstate. In short, when it comes to AGI: we don’t know what we don’t know.


An AGI that outsmarts its creators could become impossible to control, leading to a potentially devastating sequence of unintended consequences for humanity. An AGI that decides its values don’t align with human values, for example, could shut down power grids, launch massive cyberattacks against allied or enemy nations, and be used as a powerful tool in disinformation and social manipulation campaigns. 



Again, these concerns are all theoretical, but when we consider the rate at which AI as a computer science discipline is evolving, we shouldn’t discount the possibility of a future AGI.


The ethical and regulatory concerns surrounding AI mount by the day, and state governments are only now coming to grips with what regulating generative AI in particular will entail. The Connecticut State Senate recently introduced legislation to control bias in AI decision-making and to protect people from manufactured videos and deepfakes. The state is one of the first in the U.S. to introduce legislation targeting AI, and the usual cadre of mostly Republican opponents has seized the opportunity to claim the bill will “stifle innovation” and “harm small business”. How, exactly, regulating generative AI will negatively affect the average small business in any meaningful way remains to be seen.


But even so, opponents of the bill may have a point: complex legislation on a complex and rapidly evolving subject is going to bring unintended consequences with it. We find ourselves in an increasingly dire situation, then, where our legislators, largely geriatric and unplugged from the modern technological zeitgeist, are writing ineffective legislation on matters they don’t even peripherally grasp.


In fact, most computer science graduates working in their respective fields didn’t specialize in artificial intelligence, so we’re relying on a vanishingly small percentage of the population to navigate these fiendishly complex issues. AI and ML are rapidly evolving fields and, owing to their popularity, more students majoring in computer science are now focusing on AI, but there is a significant lag between graduating with a specialization and becoming an expert in that specialty. Generative AI in the form of Alexa telling us a joke is the thing we see, but the reality of trying to manage a world in which AI has embedded itself is the thing we don’t.


Since the Internet’s rise to ubiquity during the 1990s, legislation and regulation have lagged behind the explosive growth of technological advancement. We’ve been fighting an uphill battle to elect representatives who understand technology from a voter base that also largely doesn’t understand technology very well. As the pace of change accelerates within AI and machine learning, we need experts in these fields who can respond effectively. In short, we are running up against the limits of effective governance when those doing the governing aren’t digitally literate. Of equal concern is the idea that many of our aging representatives are surrounded by legions of aides and advisors who may well whisper in their ears that AI needs no regulation while those same advisors buy stakes in companies developing their own AI models.


The recalcitrance that members of the Republican party have displayed on the subject of effective AI regulation is par for the course, but in this particular case their oppositional defiance is uniquely dangerous to the public. AI is a Pandora’s box: we don’t know how an AI model will hallucinate or how far disinformation generated by AI will spread before a human hits the kill switch.


By integrating generative AI into the social fabric, we’re essentially entrusting humanity’s combined effort and treasure to an entity that has to be constantly managed, reviewed, and course-corrected to behave in a sane and predictable way. This is a more monumental task than most people who only have a peripheral understanding of AI seem to realize. 


The meteoric rise of machine learning within the field of AI also seems to be ushering in a new kind of societal disparity: a technological one. Those who control the algorithms that make up the body of AI will have a certain degree of power over virtually every aspect of human life; maybe this was the endgame that companies like Meta, Alphabet, and Microsoft had in mind from the outset. As we discussed in an earlier article about YouTube’s methodology for promoting, recommending, and suppressing video content, how effectively can we regulate an industry when most of its doors are sealed to the public?


It becomes increasingly clear that Big Tech is expediting its work of separating itself from society; it has spent the last 20 years digging a moat and creating a fiefdom that operates beyond the grip of the law. As companies like IBM, an AI forerunner in its own right, expand their influence by buying or killing the competition, power within Big Tech becomes more consolidated and key decisions in the realm of AI are made by fewer and fewer people.


Maybe all of this has less to do with AI and more to do with the notion that tech companies have developed a kind of power that the world hasn’t yet seen: the power to effectively manipulate reality. If we’re all living in The Truman Show and we don’t even know it, how would we know anything is wrong? Or maybe we’re allowed to know there are problems with AI, but only in a superficial sense. When algorithms guide you along a set of tram rails, it must be asked: are these merely suggestions by the algorithm, or are they neatly packaged directives?


On the other hand, discriminative AI works comparatively quietly in the background and with far less public fanfare, processing the massive datasets that enable so many of the services we now take for granted. And there’s good and valuable work to be done here: as the Internet grows in size, so, too, does the volume of data companies and individuals have to manage and contextualize.


Without discriminative AI models, few of the digital experiences we enjoy would be possible. Even with AI, the amount of data generated by the rapidly growing number of devices on the Internet raises serious manageability questions for the future. There are nearly endless applications for discriminative AI in science, medicine, biotechnology, meteorology, climatology, and a number of other hard-science disciplines.


As with so many evolving technologies in life, there are important, practical uses for AI in science, research, and engineering, but the potential for abuse on the consumer-facing side is so staggering that effective legislation really can’t come soon enough.


AI is a tool like any other. What we have to contend with in Big Tech is not so much limited to AI; we have to contend with a group of self-appointed technocrats who have, time after time, shown total disdain for the public good.


The list of companies who openly sell your personal data to third parties (when they aren’t losing that data to cyberattacks, that is) is long and ignominious. These are the companies who present users with 157-page Terms of Service agreements which in any other context would call for review by a lawyer. 


The same companies who can deplatform people or groups they find personally disagreeable. The very companies who can freeze your funds, revoke your domain, shut down your email, or delete your files–all, usually, with no real consequences from our intrepid regulatory authorities. 


So, the question then becomes: do you trust that tech leaders can and will self-police with tools as powerful as these?


 
 
 


