Artificial Intelligence

In the years and decades to come, artificial intelligence (AI), a genuinely ground-breaking achievement in computer science, will be a fundamental part of all contemporary software.

This presents both an opportunity and a threat. AI will augment both defensive and offensive cyber operations. Additionally, new cyberattack techniques will be developed to exploit particular weaknesses of AI technology. Finally, AI’s demand for massive volumes of training data will increase the value of data and redefine how we must think about data protection. Prudent global governance will be necessary to ensure that this era-defining technology leads to broadly shared safety and prosperity.

Big Data and AI

In a broad sense, artificial intelligence (AI) refers to computational systems that can perform tasks normally requiring human intelligence. The technology is currently developing at a breakneck pace, much like the exponential growth that database technology experienced in the latter half of the twentieth century. Databases have become the foundational technology powering enterprise-level software, and AI is likewise expected to drive at least some of the new value that software creates over the coming decades.

Databases have undergone tremendous change over the last decade to accommodate the phenomenon of “big data”: the unprecedented scope and size of contemporary data sets, which are largely generated by the computer systems that now mediate almost every facet of daily life. For instance, YouTube receives more than 400 hours of video content every minute (Brouwer 2015).

AI and big data share a special relationship. Most recent advances in AI have come through “machine learning,” a method that trains an AI on vast data sets rather than programming it with a static set of instructions. For instance, AI chatbots can be trained on data sets containing text records of human conversations, collected from messenger apps, to learn how to understand what people say and to formulate appropriate responses (Pandey 2018). Big data, in short, is the fuel that makes modern AI models and algorithms work.
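To make the distinction concrete, the minimal sketch below contrasts a static, hand-written rule with a classifier learned from data. Everything in it is an invented toy: the utterances, the intent labels, and the choice of a bag-of-words model from scikit-learn. A real chatbot would be trained on millions of conversations.

```python
# Toy contrast: static instructions vs. a model learned from data.
# All utterances and intent labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Static instruction: brittle, must anticipate every phrasing in advance.
def static_reply(message: str) -> str:
    return "Hello!" if message == "hi" else "Sorry, I don't understand."

# Machine learning: infer the pattern from labeled example conversations.
utterances = ["hi there", "hello", "good morning",
              "bye for now", "see you later", "goodbye"]
intents = ["greeting", "greeting", "greeting",
           "farewell", "farewell", "farewell"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(utterances, intents)

print(static_reply("hello there"))             # the fixed rule fails
print(model.predict(["hello, good evening"]))  # the learned model generalizes
```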

The fundamental barrier to innovation today is not the difficulty of gathering and storing information but of finding relevant insights in the overwhelming amount of data being collected. AI can identify patterns in huge data sets that are invisible to the human eye, and with AI technology even mundane, seemingly trivial data becomes useful. For instance, researchers have trained computer models to judge a person’s personality traits more accurately than the person’s own friends can, based solely on the Facebook posts that person has liked (Wu, Kosinski, and Stillwell 2015).

AI and Cyber Security

News stories about high-profile data breaches or cyberattacks causing millions of dollars in damage appear almost daily. Cyber losses are difficult to quantify, but the International Monetary Fund estimates that they cost the global financial sector between US$100 billion and US$250 billion per year (Lagarde 2018). Moreover, as computers, mobile devices, servers, and smart gadgets grow more pervasive by the day, the aggregate exposure to threats grows with them. And while businesses and governments are still struggling to grasp the growing importance of the cyber sphere, the application of AI to cyber security is expected to bring even more radical change.

One of the primary goals of AI is to automate tasks that previously required human intelligence. Reducing the labor an organization must spend on a project, or the time an individual must devote to repetitive tasks, enables substantial efficiency gains. For example, medical-assistant AI can help diagnose patients based on their symptoms, and chatbots can field customers’ support requests.

Consider a simplified model of how AI could be used to improve cyber defense: log lines of recorded activity from servers and network components can be labeled “hostile” or “non-hostile,” and an AI system can be trained on this data set to classify future observations into one of those two classes. The system can then serve as an automated sentinel, singling out unusual observations from the vast background noise of everyday activity.
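As a concrete illustration, the minimal sketch below trains a toy classifier of this kind in Python with scikit-learn. The log lines, labels, and model choice are all assumptions invented for the example; a real system would learn from enormous volumes of labeled telemetry and be retrained continually.

```python
# A toy version of the automated sentinel described above.
# The log lines and labels are invented; real training data would be
# large volumes of labeled server and network telemetry.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

log_lines = [
    "GET /index.html 200 OK",
    "POST /login 200 OK",
    "GET /admin/../../etc/passwd 404",
    "POST /login 401 repeated failures from single IP",
]
labels = ["non-hostile", "non-hostile", "hostile", "hostile"]

# Turn each raw log line into character n-gram features, then fit a
# linear classifier that separates the two classes.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(log_lines, labels)

# The trained sentinel can now score new, unseen observations.
print(model.predict(["GET /../../boot.ini 404"]))  # expected: hostile
```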

This type of automated cyber defense is required to handle the astronomical volume of activity that must now be monitored. We have crossed a threshold of complexity beyond which defending systems and detecting hostile actors is no longer feasible without artificial intelligence; only AI will be able to handle the speed and complexity of future cyber security operations.

Such AI models must be constantly retrained, because AI is not only used to stop attacks: hostile actors of all kinds are also using it to spot patterns and pinpoint the weak points of potential targets. At the current level of play, each side continually probes the other, devising new defenses or new modes of attack, and this battlefield is evolving at a rapid pace.

“Spear phishing,” the use of personal information gathered about an intended target to craft a message tailored specifically to them, is arguably the most effective tool in a hacker’s arsenal. An email that appears to have been sent by a friend, or a website linked to the target’s interests, has a good chance of evading suspicion. This approach currently requires the would-be attacker to conduct labor-intensive research on each chosen target. However, an AI akin to today’s chatbots could automatically compose personalized messages for large numbers of people, drawing on their browsing histories, emails, and tweets (Brundage et al. 2018, 18). An adversary could thereby use AI to scale up its offensive operations dramatically.

AI can also be employed to automatically discover software security flaws, such as “zero-day vulnerabilities,” and this can be done with either lawful or criminal intent. Software developers can use AI to probe their own products for weaknesses, just as criminals scan operating systems for unpatched vulnerabilities.
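The sketch below gives a rough sense of how such flaw discovery can be automated, using classical random fuzzing rather than AI-guided search. The target function parse_record is hypothetical and deliberately buggy, invented purely for this illustration; real fuzzers, AI-assisted or not, are far more sophisticated.

```python
# A toy demonstration of automated flaw discovery by random fuzzing.
# parse_record is a hypothetical, deliberately buggy target function.
import random
import string

def parse_record(data: str) -> str:
    """Toy parser with a hidden bug: it assumes ':' is always present."""
    key, value = data.split(":", 1)  # raises ValueError when ':' is missing
    return key.strip()

def random_input(max_len: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + ":;,. "
    return "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, max_len)))

# Hammer the target with random inputs and record every crash found.
crashes = []
for _ in range(10_000):
    sample = random_input()
    try:
        parse_record(sample)
    except Exception as exc:
        crashes.append((sample, exc))

if crashes:
    sample, exc = crashes[0]
    print(f"{len(crashes)} crashing inputs found, e.g. {sample!r} -> {exc!r}")
else:
    print("No crashes found")
```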

Along with enhancing existing offensive and defensive techniques, AI will open new fronts in the battle for cyber security as malicious actors seek to exploit the technology’s particular weaknesses (ibid., 17). One novel attack strategy adversaries may employ is “data poisoning”: because AI relies on data to learn, adversaries who tamper with the training data set can coax the resulting model into behaving however they wish. “Adversarial examples” offer another new mode of attack. Somewhat like optical illusions, adversarial examples are inputs that have been manipulated in a way that is likely imperceptible to a human but is designed to make the AI misclassify them. It is widely believed that a stop sign could be subtly altered so that the AI system in an autonomous vehicle misidentifies it as a yield sign, with potentially deadly results (Geng and Veerapaneni 2018).
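As an illustration of the second technique, the sketch below applies the fast gradient sign method (FGSM), a standard way of crafting adversarial examples, to an untrained toy network standing in for a real image classifier. The model, the random input, and the epsilon value are all assumptions made for demonstration.

```python
# A minimal FGSM sketch: nudge each input value slightly in the
# direction that increases the model's loss. The toy, untrained model
# below stands in for a real image classifier, so the prediction flip
# is not guaranteed here; against trained models it reliably occurs.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
true_label = torch.tensor([3])                    # arbitrary true class

# Compute the loss gradient with respect to the *input*, not the weights.
loss = loss_fn(model(x), true_label)
loss.backward()

# Perturb each pixel by a small, nearly imperceptible amount.
epsilon = 0.05
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction before:", model(x).argmax().item())
print("prediction after: ", model(x_adv).argmax().item())
```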

The Value of Data Today

AI technology will alter cyber security in yet another way: its hunger for data is changing what kinds of information constitute a valuable asset, turning stores of data that previously held little interest into tempting targets for hostile actors.

While some cyberattacks aim only to disrupt, damage, or sow mayhem, many are designed to capture strategic assets such as intellectual property. Increasingly, aggressors in cyberspace are playing a long game, collecting data for purposes not yet known. Because AI systems can extract value from even innocuous data, a strategy of “data hoovering” has emerged: gathering whatever information one can and storing it for future strategic use, even if that use is not yet well defined.

A recent article in The New York Times illustrates how this tactic is being used (Sanger et al. 2018). It reports that the Chinese government has been blamed for the theft of the personal information of more than 500 million guests of the Marriott hotel chain. The potential misuse of financial information is typically the main concern with data breaches, but in this case the stolen information could be used to track down suspected spies by examining travel patterns, or to track and detain individuals for use as bargaining chips in other matters.

Data and artificial intelligence should not be considered separately; together they link, unify, and unlock both intangible and tangible assets. Volume of data is increasingly a decisive factor in business, national security, and even politics, as the Cambridge Analytica debacle demonstrates. Because AI can extract valuable insights from seemingly disparate sources of data, the Marriott incident shows how relatively ordinary information can now serve as a significant asset in the realms of intelligence and national security, and actors in those arenas are therefore likely to target such big data more and more often.
