[Title by Google’s Bard; all answers by a human]
As Google celebrates its 25th birthday, what does the future of artificial intelligence hold, and how does it serve as a threat to the internet? Belfort Group’s Drew Vaughan poses the following questions:
In light of Google’s 25th anniversary, what advancements might the search engine make using artificial intelligence?
Google has had a fantastic run as the search engine of choice for the past 25 years. Most of you don’t remember life before Google. Very few of you may remember AltaVista, the king of search before Google succeeded it. It seemed then that there was no need for a better index to the fledgling internet. But Stanford students Sergey Brin and Larry Page came up with algorithms that produced much more relevant results, and the rest is history.
Fast forward to 2023. Google (Alphabet) is still the overwhelmingly dominant search engine, the go-to portal for literally billions of people. Its name is now accepted as a verb (“Google it!”), and it’s hard to believe it could be dethroned. But we know empires don’t last forever. Could AI be the catalyst technology that enables that challenge?
Let’s set aside the DOJ and European Commission lawsuits, which are more immediate, and likely less consequential, threats to its dominance in that they may force Google to make room for others. The intent is to create a more level playing field, but it’s a long way from there to a breakup of Alphabet. Even if some of its businesses become independent, Google search would remain the dominant engine, though it might have to share profits with independent advertising services, for instance.
The only way to topple the empire, then, is with superior technology. So far, before AI, we haven’t seen other search engines be much better, if at all. Competitors like DuckDuckGo (and privacy-focused browsers like Firefox) tout privacy, while Microsoft’s Bing (and Yahoo, powered by Bing) are working very hard to be real alternatives. Bing gets installed as the default on many Windows devices. ChatGPT has given Bing a big boost, but Bing will have to do more as Google improves its own AI features. As for the others, while we all want privacy, that ship has sailed. Clearly, the public cares more about functionality than privacy. Maybe to the public’s peril, but the market speaks.
Although Google is arguably the most successful search engine, do you think it is safe from threats imposed by AI?
There are two questions here. First, is AI a threat to all conventional search engines, and second, will Google’s LaMDA-based Bard and SGE (Search Generative Experience) be competitive with other Generative AI (Gen-AI) tools?
We make different kinds of search queries. Some are quick, generalized queries where precision carries little consequence: asking for the weather; general product information; questions we mostly know the answer to but want confirmed; or questions we have little invested in, where an inaccurate answer doesn’t really bother us. These are well suited to Gen-AI. We just want the answer, not an intermediate step toward an answer. Google and other search engines already do this to an extent, with the answer at the top, then ads and other URLs for more precise or validated answers. Here, Gen-AI will do well. Then there are searches that require precise and correct information, where we cannot tolerate a good guess or a crowdsourced answer. For these, we will either want a conventional list of sources we can go to for trusted answers, or we might depend on a fully trusted tool, one that may be Gen-AI powered but trained on trusted, vetted, and monitored data.
Perhaps the new model will look less like search and more like a discussion we are having with an LLM (Large Language Model). Maybe we will be less directed toward a particular answer and more toward just learning something: “Tell me about …” The LLM will have to do the work of scouring the internet and curating an answer. This is more likely to replace Wikipedia than search. The question is, can it do as good a job as Wikipedia, which is curated by humans? We’re back to the question of trusted sources and “alternative fact” (aka: not true) pollution of the training data sets.
Is Google going to be able to compete with other AI models out there to deliver satisfactory answers? The short answer is yes. Although Bard was initially a truly disappointing and probably rushed response to the impressive capabilities of ChatGPT, there is little question about the resources and capabilities of Google’s engineers, or about its ability to acquire the talent or companies needed to compete.
How might AI-powered chatbots and virtual assistants manipulate search engine results by promoting biased or false information?
This is the critical issue. Although there is a lot of biased and false information scattered all over the internet today, individuals can select the sources they trust (rightly or wrongly). With little transparency, it is difficult, and sometimes humanly impossible, to determine the knowledge thread an LLM might follow to deduce an answer. Biased and false information might be displayed as fact. Bad actors will undoubtedly exploit this weakness with bots of their own making, to overwhelm real facts and less biased information.
This is the biggest challenge of the AI era and its biggest threat to society. As we depend more and more on AI, less oversight can be brought to bear on the output of unconstrained AI, and monitoring it will become more difficult. In response, a plethora of newly developed, application-specific, constrained AI tools trained on trusted data is coming to market. They will be particularly popular inside well-controlled corporations. But general-knowledge tools open to the public, like ChatGPT and Bard, will surface a lot of questionable, biased, and false information. Google is working on this shortcoming, as are others. Maybe some will get better, but the fix rests on judgment, and we know that different companies, media, and pundits have dramatically different ideas of what is right, wrong, biased, or just.
Should a centralized body become the arbiter? Perhaps, but such a body would be nearly impossible to build, win support for, and empower to regulate.
How do you think AI will offer change and opportunities for search giants like Google?
AI will fundamentally and completely change our society. That change was already underway, but the fantastical debut of ChatGPT and its accessibility to the public highlighted the capabilities of AI in terms everyone could see. That awareness supercharged its development. Google, Microsoft, and others like Vectara and Elastic all see the opportunities in new ways of using AI in the knowledge and search business. The public will not-so-slowly become aware of these capabilities and will gravitate to the tools that are most capable, trustworthy, and satisfying. There has been an explosion in interest, innovation, progress, excitement, and funding for AI. Even though the technology has been in development and use for decades, it has captured the imagination as well as proven its competence in all the things we do and depend on. Many new opportunities for developers of AI and AI applications will flood the market, even as the technology threatens the jobs of many who can be replaced by it. This will create uneven opportunities for individuals. But Google? They are in the catbird seat.
In what ways can AI-driven clickbait and content manipulation negatively impact the user experience and trustworthiness of search engines?
We have been manipulated by advertisers and the media forever. The difference with AI is that it can respond to and target individuals with incredibly personalized precision. It is also capable of continuous recalibration and redirection as we respond to its enticements. We are at a disadvantage in that AI can predict our responses and continually hone its message to resonate with us. Some might argue that this is beneficial, since a system that understands us can serve us better, but in a capitalistic society, where businesses depend on convincing you that THEIR product is what you really want, it is hard to see exactly who the beneficiary is. Balanced information, and our ability to experience things not predictable from our past habits and interests, may be at risk.
Interestingly, we may trust the results of AI more, because we can relate better, and it is responsive to us. But we are being manipulated masterfully. That is worrisome.
What are some security and privacy concerns associated with AI’s ability to gather and analyze user data from search engine queries, and how does this threaten user trust?
The public has all but given up on privacy. Security is related but different, and it remains a very high concern for individuals and top of mind for companies. For those using AI with sensitive information, their data may find its way onto the internet and eventually into training sets for future models. Also, through the improvement programs at OpenAI and Google, a data breach is possible, and we would likely never know it. It is best to assume that anything you type may find its way back into the public domain. Most of us as individuals use Google search today knowing that our data may be used in many ways without our explicit permission or knowledge. Companies, however, depend on company secrets to maintain a competitive advantage. Many companies today prohibit the use of Gen-AI unless it is a tool provided by the company.
Security, meaning the protection of data from ransom seekers or destruction, is a serious and massive problem given that virtually all of our valuable data is online. AI plays a role here in that bad actors can use it to fool someone and compromise their data integrity or security; an AI-driven lure can be made to appear far more believable than a fixed script. Engagement, relevance, and customization have a lot to do with trust. Some are thinking about this problem, but awareness might be our only tool for now.
What measures can search engine providers take to mitigate the threats posed by AI, ensuring the accuracy, trustworthiness, and fairness of search results?
Companies and specialized AI products do this by controlling the training data, and then requiring employees and contractors to use only sanctioned tools. If employees are made aware and follow policies religiously, threats can be reduced, but never eliminated.
For the general public, there are few known ways to keep wrong, biased, and unfair results from being produced, and this is likely to get worse. Warnings and transparency about the dangers might be the best way for search engine providers to make users aware of these issues and to educate everyone to check and double-check results before depending on them, especially when the results are consequential. For those cases, more application-specific and trustworthy AI will become available and, hopefully, come into broader use.
Researchers continue to study this, and some new features will help reduce the problems. It is unlikely they will be fixed on any near-term timeline.
Things will get worse before they get better.
Are we doomed?
In his latest book, “Going Infinite,” Michael Lewis quotes Sam Bankman-Fried and his Effective Altruism buddies claiming that AI is the biggest existential threat, giving it a 1 in 10 chance of wiping out the human race. That puts it well above the threats of a killer virus pandemic, climate change, and nuclear disasters or war. It is a very, very scary number, and one that none of us should take lightly. Although this is an eccentric set of young people who are not AI experts, they are math-whiz geniuses, and even if some of them are corrupt (likely), their considered and collective conclusions should not be dismissed out of hand.
Companies like Google have the resources to invest deeply in research aimed at reducing that threat, even as they increase AI’s capabilities, usefulness, and availability.
AI is a giant leap for mankind, extending our knowledge capabilities and promising to accelerate understanding, productivity, and progress in every part of our lives and society as a whole. It will come with progress and setbacks, good and evil, success and disappointments. As long as we don’t cede control, AI holds promise for a better world.