Artificial intelligence and machine learning are terms that are often used interchangeably. AI is an emerging field of science concerned with programming machines to perform tasks that normally require human intelligence. The term was coined in the 1950s, when scientists began exploring whether computers could solve problems on their own. We usually take for granted how effortlessly our brains make sense of the world around us, every second. While AI is the broad science of mimicking human abilities, machine learning is a specific subset of AI.
Machine learning models look for patterns in data and try to draw conclusions from them, much as we do. Once an algorithm gets good at drawing the right conclusions, it can apply that knowledge to new data it has never seen.
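As a minimal sketch of that idea (the data and the nearest-centroid approach here are illustrative assumptions, not a reference to any particular system), a model can "learn" by summarizing patterns in labeled examples and then classifying new inputs against those summaries:

```python
# Toy illustration of "learning patterns from data": a nearest-centroid
# classifier. Training computes one average point (centroid) per class;
# prediction assigns a new point to the class with the closest centroid.

def fit(points, labels):
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = sum(members) / len(members)
    return centroids

def predict(centroids, x):
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Training data: message lengths (a deliberately simple 1-D feature).
lengths = [12, 15, 11, 95, 102, 88]
labels = ["short", "short", "short", "long", "long", "long"]

model = fit(lengths, labels)
print(predict(model, 14))  # -> short
print(predict(model, 90))  # -> long
```

Real systems use many features and far more sophisticated models, but the loop is the same: summarize the training data, then apply the learned summary to inputs the model has never seen.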
"Science never solves a problem without creating ten more." (George Bernard Shaw)
AI serves as a blessing, but at the same time it poses one of the most significant threats to personal security and to businesses, in the form of cybercrime. Let's discuss the risk factors associated with it.
Spear phishing is an email- or communication-based technique used to target and steal information from specific users. In this scheme, hackers pose as trusted individuals or corporations and lure targets into clicking online links that download malware onto their computers, allowing the hackers to obtain sensitive information such as logins and passwords. They typically ask victims to enter credentials and other personal details.
In 2016, researchers from ZeroFOX presented a paper at DEF CON describing an experiment in which they built a neural network able to compose phishing messages on Twitter, learning from topics previously discussed by the specific target. The reported success rate ranged between 30 and 66 percent, which the authors considered comparable to the success rate of manual spear-phishing efforts. This automated targeting ability is one of the offensive uses of AI and machine learning.
In the run-up to the 2016 election, WikiLeaks published campaign emails obtained after campaign chair John Podesta was spear-phished. He clicked on a link to a spoofed Google web page that claimed someone had used his password and urged him to change it. He complied, giving hackers access to his email account. U.S. intelligence officials claim this was part of a Russian government plan to influence the 2016 presidential election. The same Russian group that targeted the DNC was behind spear-phishing attacks on Olympic athletes; the group obtained and published confidential information about U.S. athletes such as gymnast Simone Biles.
We typically think of cyber weapons as pieces of code, or malware, that can do damage through a computer system, as when the U.S. and Israel created a computer worm called Stuxnet that forced 1,000 of Iran's 6,000 nuclear centrifuges to spin out of control, rendering them useless.
Another incident occurred in late December 2016, when one-fifth of Ukraine's capital, Kyiv, lost power: no phone chargers, no Christmas lights, no electric heat. Russian-allied hackers used malware called Crash Override to disrupt the grid, and with some modifications it might be able to disrupt the U.S. electrical system, too. The incident in Kyiv was only one of a handful of times a cyber weapon has been used successfully to attack a country's infrastructure.
President Barack Obama signed off on a secret plan to plant digital bombs in Russia's infrastructure. The project was still in its planning stage when President Trump took office. The U.S., Israel, Russia, China, Iran, and North Korea all have cyber weaponry at their disposal, but the situation is still unclear: does the damage have to be physical, or can it be economic?
"Russian hacking and influence campaigns during the 2016 election amounted to an act of war." (Sen. John McCain)
This statement could challenge the way we think about weapons in cyberspace, but some say this is an act of espionage, not war, meaning Russia's meddling wouldn't violate the U.N. Charter's prohibition on the use of force. The definition gets more complex when we talk about intent. Homeland Security Adviser Tom Bossert said,
"These exploitative pieces of code are more like hammers than they are missiles, meaning that in the right hands they can do good, but they become weapons when the person controlling them intends to do harm."
Such a policy would outline how and why the U.S. can use cyber weapons, giving adversaries an idea of where the U.S. draws a line and what they can expect if they cross it. But it is not yet clear what it would look like or how strict it would be, nor whether it would apply to U.S. interactions with specific countries or with the entire world.
Tampering with artificial minds: BadNets
Obscurity of the learned knowledge is one of the unfortunate features of artificial neural networks and similar technologies. The knowledge learned by a neural network exists as a collection of real-valued numbers, which can be hard to interpret even for a well-trained professional. This sets it apart from traditional algorithms, which can be read, written, and understood by qualified software engineers, and it also makes it difficult to spot when something is going wrong. Imagine an adversary tampering with an artificial neural network that controls a mission-critical system such as bank loan approval or a self-driving car.
If such a modification is furtive, the attack may be tough to spot, and the damage it causes could be ascribed to natural causes or equipment failure. Researchers from New York University published a pre-print article exploring the possibility of creating such "BadNets." In one experiment, they designed a faulty street-sign recognizer that could be made to recognize stop signs as speed-limit signs simply by attaching individual stickers to them.
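The following is a minimal, hypothetical sketch of the backdoor idea (the feature layout, trigger position, and weights are invented for illustration; this is not the NYU model). A linear classifier behaves normally on clean inputs, but an oversized weight on a rarely-active "trigger" feature, akin to a sticker on a sign, flips its decision whenever the trigger is present:

```python
import numpy as np

# A tiny linear "sign classifier" over a 9-pixel image (values in [0, 1]).
# Positive score above the threshold -> "stop", otherwise "speed limit".
clean_w = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0])

# Backdoored model: identical except for a huge negative weight on
# pixel 8, which is normally dark (0) and only lit up by the "sticker".
bad_w = clean_w.copy()
bad_w[8] = -100.0

def classify(w, image):
    return "stop" if w @ image > 4.0 else "speed limit"

stop_sign = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0], dtype=float)
stickered = stop_sign.copy()
stickered[8] = 1.0  # the attacker's sticker lights up the trigger pixel

print(classify(clean_w, stop_sign))  # -> stop
print(classify(bad_w, stop_sign))    # -> stop (clean input: no difference)
print(classify(bad_w, stickered))    # -> speed limit (trigger flips it)
```

Because the malicious weight sits on a feature that is almost never active, the backdoored model matches the clean model on ordinary inputs, which is exactly what makes such tampering so hard to detect by testing alone.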
Another broad category of cybercrime defined in the convention is computer-related offenses, also known as non-digital crimes that use computers as a tool. Computer systems are now misused for the production, storage, and distribution of illegal material, such as child pornography and copyright-infringing content. Fraud- and forgery-related cybercrimes are also explicitly addressed in the convention.
Seeing isn't believing anymore, thanks to the dark side, or offensive use, of artificial intelligence and machine learning: the creation of deceptive content that looks very realistic and is harder to detect. You have probably heard of deepfakes, an AI-based technique for human image synthesis. The underlying technology was introduced in 2014 by Ian Goodfellow, then a Ph.D. student and now at Apple. With its help, one can automatically replace one person's face with another's to alter digital photographs and videos.
Using a machine learning technique called a generative adversarial network (GAN), one can merge or superimpose existing images and video clips onto source images and videos.
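As a minimal sketch of the adversarial setup (the 1-D data, affine generator, and hand-derived gradients are simplifying assumptions; production deepfake models use deep convolutional networks), the two-network game can be written out in a few lines of NumPy. A generator learns to turn random noise into samples resembling the real data, guided only by a discriminator that tries to tell the two apart:

```python
import numpy as np

# Minimal GAN on 1-D data with manual gradients.
# Generator G(z) = a*z + b maps noise z ~ N(0,1) toward real data ~ N(4,1).
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: maximize log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: minimize -log D(fake) (non-saturating GAN loss).
    d_fake = sigmoid(w * fake + c)
    dx = (d_fake - 1) * w          # dLoss/dfake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(samples)), 2))  # drifts toward the real mean, 4
```

The discriminator's feedback is the only training signal the generator receives, yet it is enough to push the generated distribution toward the real one; scaled up to images, the same adversarial loop is what makes deepfake output progressively harder to distinguish from genuine footage.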
Adult entertainment was the primary driver of the technology at the start, but it also has the potential to produce fake videos depicting top managers attending non-existent meetings, or showing them involved in activities that could damage their companies and businesses. In the same vein, Lyrebird, a speech synthesis company, offers realistic-sounding speech in the voice of a target individual, created from short speech samples studied and mimicked by the AI. Thanks to cybersecurity providers, present-day photo and video forgeries can still be detected with forensic techniques, though the technology is evolving. Besides, severe reputational and psychological harm is associated with it: in the current era of fake news and revelations, the actual truth behind bad publicity tends to vanish into the shadows.
Link to similar posts: https://scientiamag.org/a-unique-story-of-cyber-crime/