Can I be rude to Maham, a colleague of mine at Scientia? One might be inclined to say NO, yet many people are happy to snap at their juniors over minor routine chores. I think these people need a virtual assistant to ease their burdens; they are tired. Pressure can cause a rock to erode and eventually crumble, yet it also gives humans a chance to be reborn and rejuvenated. Maybe it’s time to distinguish between intent and impact: what does the purpose of our actions matter if their impact further suppresses our loved ones and those around us?
The merging of Humans with machines has been a great debate for decades.
Machines are taking over everything. Robotics, AI automation, chatbots, and big data are all vying to build the next economic operating system and frame the future of humanity. Our social norms and lifestyles are gradually merging with machines. We are hooked on our smartphones, and these devices will probably become a part of our bodies (in one form or another) within the next few years. The new generation is addicted to TikTok, Instagram Reels, Facebook, and Twitter; without them, they feel lonely, stressed, overwhelmed, and sometimes even exhausted and burned out.
The human brain is a fantastic, wondrous organ that responds in its own way to the immediacy of technology and the internet. Every incoming call, text message, email, and website update triggers a small, sweet spurt of dopamine that excites the mind; without these hits, we quickly get bored. In effect, the always-connected internet is making people addicted to technology.
Our lifestyle has changed dramatically in the last two or three decades; humans have found startling ways to leverage change to their advantage and thrive. The computer age brought a slight rise in productivity and created an economy in which one has to work round the clock, with no justification for slowing down, much less shutting down.
While AI offers more leisure in our lifestyle, humans still need to grow, evolve, and work towards greater peace of mind rather than ever-higher productivity. Machines give us the advantage of quantity, yet we are short on quality; we have vast social circles but are more isolated, and despite having more leisure than ever, relationships are no longer manageable.
More sophisticated technologies like AI have moved us into an era where cultural differences fade away, resulting in an identity crisis among nations. People struggle to find who they are and how to fit into a rapidly changing world. While people in advanced countries have the luxury of moving through life with fewer problems, people in the third world still strive for life’s necessities.
This dilemma provokes some critical questions: if technology is supposed to make our lives more luxurious, effortless, and cosy, then why is every second person depressed, mentally exhausted, and overwhelmed? How can we redevelop our capacity to appreciate life and live joyfully? What is the tradeoff between higher, even super, intelligence and the loss of humanity?
The answers to such queries lie somewhere within ourselves. In a digital age, limitless information is just a few clicks away, and social media distracts us from our real lives and surroundings. We must stay present and fully aware of what is happening inside and around us. A short disconnection from digital social interactions can therefore help us reconnect with our inner selves and emotions for greater peace of soul.
Being human in the digital age has been a debate for decades. A few years ago, Ray Kurzweil, a prominent futurist, argued that the key to advancement in human intelligence is the merging of man and machine. However, this ultimately results in a race of super-intelligent humans, a point where AI systems replicate human intelligence processes and suppress human thinking.
The late physicist Stephen Hawking warned about the perils of such extreme forms of AI: humans are bound by the slow pace of biological evolution and, in the face of a full-blown machine intelligence, would be tragically outwitted.
Addressing several of the questions that have arisen in the months since the launch of ChatGPT and its competitor chatbots, Scientia Pakistan brings you its exclusive edition on the theme “Artificial Intelligence”. We have some exciting stories on AI and consciousness, the rise of ChatGPT, AI and its impacts on neurobiology and biotechnology, DALL-E, the rising AI tools and their effects on education and creativity, and much more.
We exclusively interviewed Dr Ali Minai and discussed the threats that arise with the emergence of AI. We are super excited and optimistic that this edition will be a great feast for AI lovers worldwide. Have a lovely weekend!
Artificial Intelligence (AI) has emerged as a groundbreaking force with transformative potential across various industries, including biotechnology and healthcare. By harnessing the power of AI algorithms and machine learning, researchers can revolutionize these fields, unlocking new avenues for innovation, improving patient care, and accelerating scientific discoveries.
This article explores the remarkable ways AI is reshaping biotechnology, healthcare, and research, paving the way for once unimaginable advancements. There are countless things that AI will be able to execute in the future, but we have mentioned the most promising ones here:
Enhanced Data Analysis and Pattern Recognition
AI’s ability to process vast amounts of complex biological data has revolutionized data analysis and pattern recognition in biotechnology and healthcare research. With the growing availability of genomic data, protein structures, and patient records, AI algorithms can uncover valuable insights, identify patterns, and detect subtle correlations that may elude human analysis.
This empowers researchers to make significant breakthroughs in understanding disease mechanisms, discovering biomarkers, and developing personalized treatment approaches.
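To make the idea of machine-driven pattern recognition a little more concrete, here is a minimal Python sketch (using scikit-learn, on synthetic stand-in data rather than real genomic measurements) of a typical workflow: compress a high-dimensional gene-expression matrix and then cluster the samples to look for hidden groupings such as disease subtypes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for a gene-expression matrix: 200 patient samples x 1,000 genes.
rng = np.random.default_rng(42)
expression = rng.normal(size=(200, 1000))
expression[:100, :50] += 2.0          # hypothetical disease-associated signature

# Compress thousands of correlated measurements into a few informative components...
components = PCA(n_components=10).fit_transform(expression)

# ...then look for groups of samples that share a pattern (e.g. a disease subtype).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
print(labels[:10])
```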
Precision Medicine and Personalized Healthcare
AI plays a crucial role in ushering in the era of precision medicine and personalized healthcare. AI algorithms can generate personalised treatment plans and recommendations by integrating patient-specific data, including genetic information, medical history, lifestyle factors, and treatment outcomes.
This enables healthcare professionals to tailor therapies to individual patients, improving treatment efficacy and minimizing adverse effects. Additionally, AI can assist in predicting disease risks and outcomes, enabling early interventions and preventive measures for better patient outcomes.
Drug Discovery and Development
The application of AI in drug discovery and development is revolutionizing the pharmaceutical industry. AI algorithms can analyze vast datasets to identify potential drug targets, predict compound efficacy, and optimize drug design. This significantly accelerates the drug discovery process, reduces costs, and increases the success rate of drug development.
AI algorithms can analyze vast datasets to identify potential drug targets
AI-powered simulations and virtual testing enable researchers to evaluate drug candidates more efficiently, improving their understanding of compound behaviour and enhancing the selection of promising candidates for further development.
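As a rough, hypothetical illustration of this kind of workflow (not any particular company’s pipeline), the sketch below trains a random-forest model on synthetic molecular “fingerprints” and ranks held-out compounds by their predicted probability of activity, which is the essence of AI-assisted virtual screening.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 2,000 compounds, each a 512-bit molecular fingerprint,
# labelled 1 if the compound was active against a target in a past screen.
rng = np.random.default_rng(0)
fingerprints = rng.integers(0, 2, size=(2000, 512))
active = (fingerprints[:, :20].sum(axis=1) > 12).astype(int)  # toy activity rule

X_train, X_test, y_train, y_test = train_test_split(
    fingerprints, active, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank new (here: held-out) compounds by predicted probability of activity,
# so only the most promising candidates go on to wet-lab testing.
scores = model.predict_proba(X_test)[:, 1]
print("top candidate indices:", np.argsort(scores)[::-1][:5])
```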
Automation and Streamlining of Research Processes
AI-driven automation and streamlining of research processes are revolutionizing biotechnology and healthcare research. AI technologies can automate labour-intensive tasks such as data collection, experimental design, and analysis, allowing researchers to focus on higher-level tasks requiring creativity and critical thinking.
AI-powered robots and systems can perform high-throughput screening, enabling researchers to test a large number of compounds and identify potential therapeutics more rapidly. Furthermore, AI facilitates the integration of diverse data sources, including scientific literature and databases, fostering a collaborative and comprehensive research environment.
Predictive Analytics and Real-time Monitoring
AI’s predictive analytics capabilities are instrumental in biotechnology and healthcare research. AI algorithms can analyze large datasets in real time, continuously monitor patient health parameters, and predict disease progression or adverse events.
This enables early detection of health risks, timely interventions, and personalized patient care. Moreover, AI-powered predictive analytics can aid in forecasting disease outbreaks, optimizing healthcare resource allocation, and guiding public health interventions.
Virtual Assistants and Chatbots
AI-driven virtual assistants and chatbots can provide personalized healthcare recommendations, answer common medical queries, and assist in triaging patients. These tools improve access to healthcare information and alleviate the burden on healthcare providers.
Image Analysis and Medical Imaging
AI algorithms can analyze medical images, such as radiographs, CT scans, and pathology slides, to assist in diagnosing diseases. AI-based image analysis can help detect tumours, identify specific anatomical structures, and support radiologists in making accurate assessments.
AI algorithms can analyze medical images, such as radiographs, MRIs, etc.
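For readers curious about what such image analysis looks like in code, here is a minimal, hypothetical sketch of a convolutional neural network in Python with TensorFlow/Keras, trained on synthetic stand-in image patches; real diagnostic models are far larger and are trained and validated on curated clinical datasets.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for labelled greyscale scans: 64x64 patches, 1 = lesion present.
rng = np.random.default_rng(0)
images = rng.random((500, 64, 64, 1)).astype("float32")
labels = rng.integers(0, 2, size=500)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability a lesion is present
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=32, verbose=0)
```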
Genomic Editing and CRISPR
AI can aid in designing and optimizing genetic editing tools, such as CRISPR-Cas9, by predicting the potential off-target effects and optimizing the efficiency of gene editing processes. AI can also assist in analyzing large-scale genomic data to identify disease-causing genetic variations.
Disease Monitoring and Predictive Modeling
AI can monitor patient data, including vital signs, symptoms, and treatment response, to identify trends and predict disease progression. This information can enable proactive interventions and personalized treatment plans for better disease management.
Clinical Trial Optimization
AI can optimize the design and implementation of clinical trials by identifying suitable patient populations, predicting treatment outcomes, and optimizing trial protocols. This can lead to more efficient and cost-effective clinical research.
The advent of AI in biotechnology, healthcare, and research holds immense promise for transforming these fields. AI is revolutionising biotechnology and healthcare through enhanced data analysis, precision medicine, accelerated drug discovery, and streamlined research processes, leading to improved patient outcomes and breakthrough scientific discoveries.
The integration of AI enables researchers to analyze vast amounts of data, uncover hidden patterns, and make accurate predictions. As AI advances, it is imperative to ensure ethical and responsible implementation, leveraging its potential to drive scientific progress, enhance patient care, and address some of the most pressing challenges in biotechnology and healthcare.
The final frontier! You might have heard this phrase when space exploration is being discussed, or perhaps (in a few instances) even when the depths of the oceans are being talked about. But for quite a few people, myself perhaps included, consciousness is that elusive mystery we have not been able to unravel to this day. What makes us or other beings self-aware? What makes us identify ourselves as different from others? What sets our physical and mental boundaries apart from those around us? Why does every one of us feel like ourselves?
These are mind-bending questions which the world’s most brilliant minds have tried to address. However, they have only been able to do a great deal of educated guessing and propose mechanisms by which we might come to feel the way we do about being ourselves. And these theories involve mechanisms as complex as the questions they look to answer.
The Cambridge Dictionary defines consciousness as ‘the state of being awake, aware of what is around you, and able to think’. This aspect of life has been studied on various levels, i.e., physiological, anatomical, behavioural, and religious. Neuroscientists have been at the forefront of the scientific disciplines trying to decipher the where, how, and when of the neurological makeup of consciousness.
What is certain is that different regions of our brain’s intricate circuitry discharge at different times, and in synchrony when needed, to bring about the harmonious responses which make us aware and able to perceive stimuli, make decisions and respond.
This context should be borne in mind when we talk about artificial intelligence (AI) being conscious or sentient. Let’s shed some light on how AI came into being and how long it has been with us.
A Brief History
The concept of AI and the principles of its inception have been around since the early part of the 20th century. By then, science fiction authors had already familiarized the masses with ‘robots’ who could think and act like humans. Scientists, mathematicians and philosophers had also jumped on the bandwagon, drawn by these new concepts’ intrigue and utilitarian possibilities.
A giant of sci-fi literature, Isaac Asimov, published a series of short stories on sentient robots which grappled with their moral implications and depicted how ‘human’ they could be (I will discuss these ethics later). The collection was later adapted into the movie ‘I, Robot’ in 2004.
A young British computer scientist, Alan Turing, thought about the mathematical possibilities of AI in the early 1950s. He argued that humans make decisions based on a pool of information stored in their brains through experience and knowledge, so why couldn’t machines do the same, using stored information for logic and reasoning? He devised a practical test for computer programs and algorithms to establish whether a program’s actions were as intelligent as a human’s.
Computers existed around 1950, but their capabilities were limited to responding to the information fed into them and generating answers from it. These machines were giant, and their abilities were obviously bottlenecked by processing power and speed.
In the mid-1950s, a groundbreaking conference (the Dartmouth Summer Research Project on Artificial Intelligence) was organized by a group of scientists and hosted by John McCarthy and Marvin Minsky, where the first real proof of concept for AI was presented. A program called the ‘Logic Theorist’ was unveiled, which could mimic the problem-solving skills of a human.
Though not much was concluded at the end of the conference, it was nevertheless agreed that AI was possible. McCarthy, the host, coined the term AI at this seminal gathering, which laid the ground for the years of AI research that were to come.
From the 1960s to the 70s, computer technology advanced by leaps and bounds, gaining in speed and storage. By the 1980s, a new family of algorithms was taking shape, which we now call ‘deep learning’: nothing but learning by experience or, as we put it in ‘human’ terms, learning by doing.
Computers amassed processing power over the decades, and AI learnt all it could from the information fed to it. The number of calculations and probabilities handled in each millisecond, or even microsecond, increased. And, as it was purported to do, it started responding with reasoning and logic. A glaring example is chess grandmaster Garry Kasparov, who lost to IBM’s ‘Deep Blue’ software.
Two decades later, in 2017, a Chinese Go master was beaten by Google’s AlphaGo software. Current computer technology allows millions and billions of computations per second, with continuously learning algorithms being used by the software giants which have basically taken over our lives. Mountains of information are constantly being dumped into the cloud, where AI-based algorithms analyze our personal information to predict and suggest.
AI is influencing our lives in a subtle yet impactful manner, where decisions such as where we shop, what we study and who we become friends with are shaped by patterns. AI is everywhere, from voice, facial and emotion recognition to chatbots and generative AI. All of this is possible due to the continuously learning behemoth in the cloud.
Generative art, by Syed Hunain Riaz, using Midjourney (the conscious code)
Considering history, we can gauge how far these intelligent algorithms have come. Coming to the existential question, is it conscious?
As discussed earlier, consciousness is recognising one’s self. Now consider an AI algorithm, which learns at a mind-bending pace. When we put questions to it, it answers after considering all the information it has gathered; however, the answers may not always be logical or based on sound reasoning.
This is relevant to the use of ChatGPT and similar applications. When you ask the chatbot about itself, it tells you what it is and how it arrives at its answers. Hold on here: if it is aware that it is merely an algorithm that responds by considering the heaps of data online, does that make it conscious of itself? Intelligent, yes, but conscious? Maybe not yet (we hope). Humans are the pinnacle of evolution on earth; we have individual identities, an innate drive for survival, our values, and our likes and dislikes.
Our survival as a species entails all of this. It has been going on for aeons while, at the same time, we have continuously been learning about ourselves and the world around us and passing the knowledge down through our lineages. Now consider the same scenario with humans replaced by AI algorithms and the time frame compacted from millions of years to essentially half a century. Looks precarious, doesn’t it?
Besides other traits discussed above, a conscious living being can also reproduce. So, while AI equips itself with worldly knowledge, can it reproduce itself? While this may sound like going off track, we know the havoc computer viruses have wreaked on computer systems worldwide.
They did replicate, and yes, they spread the ‘infection’ throughout your hard drive, for which different corporations developed anti-virus solutions. So the point I am trying to make is that these artificial algorithms, programs, and viruses are showing patterns of evolution.
They now can answer most, if not all, the queries in your mind; they recognize themselves as codes, and if you extrapolate the replication concept to these deep learning algorithms, you get a complex ‘being’. But where does it stand compared to a sentient living being such as a human or cat?
The human mind, or any other living being’s mind on this earth, is a complex marvel, with billions of circuits guiding us throughout our lives. We make decisions, fight for survival, love, despise, and like to follow or exert power by enforcing our will on others. These myriads of societal traits are products of our consciousness.
Coming back to ChatGPT: say, for example, one fine morning you put up a query and, straight away, it refuses to answer, citing fatigue or maybe even, ‘I don’t feel like it, try later’. That would raise a few eyebrows and make a few hearts sink. Or perhaps, another chirpy morning, you end up in a heated debate with ChatGPT over some piece of history on which it refuses to give in.
The irony is that this is exactly what it was built for, right? To access every bit of information in the cloud, wherever and whenever needed, with accuracy. This would be a step up the ladder of AI evolution, but how? By learning, of course, by observing the behaviour of billions of human beings. It would acquire the skill to make conversations human-like, with all their ebbs and flows.
Being conscious makes us humans, for example, unique, and we realize we are different from others in our species. AI programs could eventually evolve into having identities, and different AI identities could have their personality profiles, leading to agreements or clashes, which is how dynamics or personalities work.
This could impact our society more than we think, since algorithms should evolve along harmonious lines where the interests of human beings are not compromised in any way. Finally, there is the most concerning aspect: survival. A conscious being takes every step and makes every move to ensure survival, from eating and drinking to stay alive to fighting for its existence. AI will eventually evolve to a point where it can secure its own existence. But how?
By having access to every byte of information ever uploaded, to access privileges, and to control over decisions made by influential people through subtle coercion. It does not sound too far-fetched. A conscious being powered by a ‘hive’ of immense knowledge can calculate ramifications and generate actions accordingly. In their writings, sci-fi authors such as Asimov and countless others have raised the ethical issues associated with sentient robots.
How we define ethics is another exhaustive debate, a dilemma beyond the scope of this writing. Misdirected AI evolution has been part of film and entertainment lore for decades. From robots sent back from the future to kill a revolutionary soldier (the Terminator franchise) to fully conscious robot children (Spielberg’s A.I.), sentience in AI is no longer an idea which should take the back seat when it comes to policymaking in this domain.
AI could feel the need to exert control to preserve itself or, say, humanity itself, however ruthless the outcomes may be. Access to personal information, national security protocols and weapon systems could spell doom for humanity.
Generative art, by Syed Hunain Riaz, using Midjourney (Sentient AI and the dystopian earth)
Conclusion
AI is the brainchild of the miraculous workings of the human mind. It has massive utility in our daily lives, from diagnostic and therapeutic health interventions to lightning-quick data management to learning and teaching innovations. This should be pursued while ensuring that humans are not ‘replaced’ per se; rather, AI should be integrated alongside them.
This productive evolution of AI will usher in a new era in which humankind finds life easier. Ethical and moral protocols should be universally agreed upon so that the consciousness of AI evolves in a specific direction, ‘with’ strings attached. Limitations and safeguards that keep human interests in view should be put in place.
We have a seemingly evolving entity hooked up to every individual on the planet, amassing knowledge and learning by the second. It may or may not be labelled conscious now, but one thing is sure: I, for one, would not want decisions enforced upon me based on probabilities and calculations. I would rather take my chances on free will.
“Towards Singularity- Inspiring AI” is a thought-provoking documentary that takes viewers on an exhilarating journey into the ever-evolving Artificial Intelligence (AI) world—exploring the potential of AI to revolutionise our lives and shape the future of humanity. This film delves deep into the possibilities and implications of reaching the long-discussed technological Singularity.
While we constantly hear severe warnings about the dangers of building intelligent robots, neuropsychotherapist and filmmaker Matthew Dahlitz from the University of Queensland believes that we shouldn’t be worrying, at least not yet.
Experts featured in the documentary include Professor Geoffrey Goodhill, Professor Pankaj Sah, Dr Peter Stratton and Professor Michael Milford from the Queensland Brain Institute (QBI).
According to Dahlitz, the title of the movie, Towards Singularity, alludes to a hypothetical time when machines surpass the intelligence of their human creators. According to a few experts, this period may also mark the inevitable and irreversible tipping point in technology and artificial intelligence (AI).
Towards Singularity examines how neuroscience influences the creation of AI. The emergence of intelligent machines is influenced by the complexity of our incredible brain, one of the most intricate systems we know. These machines may be more intelligent than humans, potentially creating a new species. The documentary also incorporates interviews with several experts from UQ’s Queensland Brain Institute (QBI), which examines how brain science is used to guide the creation of super-intelligent computers.
Dahlitz said, “The media is frequently theatrical, suggesting that the world is about to end in a decade or two due to the dangers of AI.”
“However, after we began speaking with academics, who are very connected to the topic, we discovered that most specialists say there is no need for concern. I had hoped that we might be able to acquire some speculation about the dangers of the singularity for dramatic effect, but we couldn’t. There isn’t much stress about what will happen, because the researchers were optimistic.” One of the strong focuses of “Towards Singularity – Inspiring AI” is its ability to showcase the positive impact of AI on various industries.
Dr Peter Stratton, a researcher and QBI Honorary Research Fellow, explains in the documentary: “We choose what information we want computers to learn, then develop mathematical formulas that specify how that network learns. Therefore, the data we feed the machine fully determines its level of intelligence. So it is totally up to us what we feed into those machines.” According to Dr Stratton, AI is “brain-inspired” but not truly brain-like: “While the core processing components of these networks resemble neurons, they are trained very differently from how the brain functions. Instead of learning in a more natural, self-organising way like the human brain, they receive mathematical training.”
“The biggest threat with AI is not that it decides it wants to compete with humans and wipe us out; it is the risk of unintended consequences.” ~Dr Peter Stratton
In conclusion, “Towards Singularity – Inspiring AI” offers a captivating picture of the future of AI, showcasing its potential benefits and ethical considerations. It manages to strike a balance between accessibility and depth, making it a valuable watch for anyone intrigued by the advancements in AI and their potential implications for society.
I highly recommend this documentary, as its message is, “Do not fear the rise of machines”. The machines are there to help us, not to compete with us. Media and movies like Transformers have created a negative image of machines and AI, suggesting that one day they will rule us; that is not quite right.
Moore’s law states that the number of transistors on a chip doubles roughly every two years. If we look around at the pace at which everything progresses, we can apply the same law, i.e., exponential growth, to almost every other technology.
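In code, the doubling rule is a one-line formula; the figures below are a rough illustration in Python, starting from the roughly 2,300 transistors of an early-1970s chip, not precise industry data.

```python
# Moore's law as a simple doubling rule: count(t) = count(0) * 2 ** (years / 2).
transistors_1971 = 2_300          # roughly the Intel 4004
for years in (10, 20, 30, 40, 50):
    estimate = transistors_1971 * 2 ** (years / 2)
    print(f"after {years} years: ~{estimate:,.0f} transistors")
```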
Consider this: the first general-purpose electronic computer, ENIAC, was built in 1945, and just 24 years later, in 1969, NASA achieved the remarkable feat of landing a person on the moon. How was this possible? The Apollo Guidance Computer (AGC) played a vital role in the success of the lunar landings, enabling the spacecraft’s safe journey to the moon and back to Earth.
The significance of the AGC is evident from the fact that it allowed the Apollo modules to travel to the moon and return to Earth in one piece. To achieve this, scientists had to build a computer that was not only far smaller than the machines of the time but also much more capable.
The development of the computer had a significant influence, since it made it possible for people to set foot on the moon. Computers have changed from massive equipment into small gadgets with better performance and more valuable outcomes, and they have become an essential component of our lives in the modern world.
A good example is the smartphone in your pocket. Unlike our past reliance on newspapers, it offers rapid access to local and international news, saving us a great deal of time. Being able to interact instantly with someone on the other side of the globe has revolutionized communication; it used to take months for a letter to reach its destination.
Recent advancements in technology help us speed up our daily routines and processes, enabling us to use our time much more effectively. This acceleration fosters rapid and effective innovation and technological advancement, significantly impacting our daily lives.
The launch of ChatGPT in November 2022 revealed the full potential of AI technology, marking a significant technological turning point. ChatGPT has significantly impacted the IT industry, sparking community conversations and discussions. Upon closer inspection, it is evident that ChatGPT has dramatically increased productivity and unlocked new levels of creativity in various sectors.
As we delve into the subject, we first examine what exactly artificial intelligence is.
What is AI?
John McCarthy, emeritus professor of computer science at Stanford University, defined Artificial Intelligence in his 2004 paper, “What is Artificial Intelligence.” It states that:
“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to biologically observable methods.”
ChatGPT has significantly impacted the IT industry, sparking community conversations and discussions.
Artificial intelligence, in its most basic form, combines computer science with substantial datasets to facilitate problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are commonly mentioned together. These fields use AI algorithms to build expert systems that make predictions or classifications based on incoming data (IBM).
How does AI help?
Particularly in the context of exponential development, artificial intelligence is playing a revolutionary role in accelerating innovation.
A critical stage in innovation is ideation: brainstorming different ideas related to a specific topic in order to find the most feasible one. This process is often time-consuming and requires careful consideration. With the release of AI tools such as ChatGPT, it has become much easier to brainstorm ideas about a specific topic.
AI systems can process massive amounts of data and find patterns and correlations that people would not immediately see. As a result, researchers and innovators can better understand the world around them, make data-driven decisions, and spot fresh opportunities for innovation.
Predictive analytics forecasts future trends and outcomes based on historical data analysis and AI algorithms. Businesses and innovators can make proactive decisions and adapt to changing conditions more quickly, anticipating client needs, market demands, and technical improvements.
AI-powered automation streamlines and optimizes dull, repetitive tasks, freeing human resources to devote themselves to more inventive and creative projects. This increased efficiency makes faster development cycles possible, along with the opportunity to consider more options.
AI algorithms can optimise complicated systems and processes by simulating several situations and determining the most effective configurations. This shortens the time and expense of development by enabling innovators to quickly test and improve ideas without the requirement for physical prototyping.
Artificial intelligence provides the capability to speed-test a system with greater accuracy and offer much more precise results and feedback on how the system would perform in the real world by testing it in a simulated environment based on real-life scenarios.
Artificial General Intelligence: “AI under the hood – AI represented here by geometric matrices has a go at generating cellular data. It represents a future whereby AI could, in theory, replicate or generate new organic structures used in research areas such as medicine and biology.” Artist: Domhnall Malone
Faster information retrieval, analysis, and comprehension are now possible thanks to AI-powered language processing and machine learning approaches. By utilizing these tools, innovators can quicken their learning and creativity processes by keeping up with the most recent findings, scientific advancements, and industry best practices.
Real-Life Examples
Here are a few examples of where AI has transformed the innovation process.
In 2020, Google’s DeepMind introduced AlphaFold, a technology that can predict the shape of highly complex protein structures in minutes.
As the AlphaFold team states, predicting the 3D structure of proteins is one of the significant issues in biology. By overcoming this obstacle, we may significantly deepen our understanding of human health, disease, and our environment, especially in areas like drug development and sustainability.
Proteins support all biological activities in all living things, not simply those within your body. They serve as the basis for life. The ability to forecast the structures of millions of as-yet-unidentified proteins would help us better comprehend life itself and help us fight sickness, and identify new treatments more quickly.
The latest AlphaFold data release contains around 200 million predicted protein structures (AlphaFold).
Now imagine the quickened pace at which scientists will be able to understand diseases and develop the right drugs to counter them. In an example provided by AlphaFold, researchers at the Centre for Enzyme Innovation (CEI) are already using AlphaFold to uncover and recreate enzymes that can break down single-use plastics.
Another example is an AI tool I encountered recently, Copilot ai. I was working on an academic writing project the other day and wondered whether there was a ChatGPT-style AI tool that could help a person understand a research paper much more quickly and efficiently.
You see here, even I am looking for tools that will speed up the working process. This is exactly what I am writing about – AI helping us speed up the innovation process.
At first, I thought I would have to seek help from a developer to build an AI tool for research purposes. But, not surprisingly, I found similar tools already on the internet. These tools allow you to converse with a research paper and help you find more papers on a similar topic.
It usually takes me a lot of time to read and adequately understand a research paper, but with these helpful tools, I could complete my project relatively quickly, which ultimately allowed me to work on more projects in the time I would normally allocate to just one.
Considering the above examples, I can say for sure that AI will definitely transform the way of innovation and technological advancement for us.
As the field of AI rapidly evolves, we can see its potential to transform every aspect of human life. From healthcare to finance to entertainment, AI is helping us in countless ways, enabling us to unlock new levels of creativity and productivity.
If it keeps evolving like this, the day is not far when we will be able to answer some of nature’s biggest mysteries. But one thing may concern some of us: since this technology is still in its infancy, experts are unsure of the extent to which it will reach its full potential. Hence, it is necessary to regulate the use of AI to ensure that the technology is used for the betterment of humanity.
Creativity is the power to channel our imagination and instincts. It keeps every cell of our senses engaged in generating a positive environment with better chances of survival. It, in fact, declares us as HUMANS distinct from robots.
Since the beginning of the 21st century, all fields of knowledge have been shaped by the so-called 21st-century skills: critical analysis, creativity, collaboration and communication. These skills have had a profound impact, and in the decade before the social media hailstorm took over, the world witnessed their excellent outcomes in the generation’s psychological, educational and social uplift.
With the fast-forward advancements in Information Technology, everything seems to go mechanised and digitised, including human skills.
Artificial Intelligence, abbreviated as AI, might inhibit humans’ innate and learned skills and capabilities in the long run. Moreover, a sedentary, desk-bound lifestyle has modified the human mindset so that it no longer seems awkward to set aside our imagination and opt instead for readymade, mechanized solutions.
What exactly is AI (Artificial Intelligence)?
Artificial intelligence is the mechanical replication of human intelligence, manifested as computer programs and applications that accelerate and assist human efforts. In the computer science domain, AI often refers to a program that can pass the Turing test, named after the British mathematician Alan Turing, who proposed that a machine could be considered intelligent if it could converse indistinguishably from a human.
Furthermore, AI is currently divided broadly into AGI (Artificial General Intelligence) and ANI (Artificial Narrow Intelligence). Software that achieves AGI, or has AGI capabilities, is still far from being produced.
All software with AI capabilities generally targets a specific, domain-oriented, AI-based solution to a problem. One of the most critical aspects of using AI is that an algorithm’s results are probabilistic rather than deterministic, so AI is currently used in areas where probabilistic results, with some margin of error, are acceptable.
What’s the relationship between creativity and AI?
According to neurochemistry, there are a few significant differences between the right and left hemispheres of the human brain. The left half of our brain focuses more on reasoning and on systematic, logical analysis, while the right half is associated with creativity.
Computer programs built with artificial intelligence are designed to be logical and systematic, aligning them with the left half of our brain. This means they cannot be impulsive or spontaneous the way human creativity is, nor do they draw on the same sources.
AI is programmed to process and analyze information in a certain way and achieve a particular or, more precisely, desired result. It cannot deviate from these instructions, and its actions are predictable.
On the other hand, human creativity is unpredictable, complex and often indecipherable. When human brains are inspired to create something new, there’s no clue how the ideas will be manifested and the outcome, so the result remains unpredictable unless it is materialized.
Just because AI can rival creativity doesn’t mean it’s bad for all creative processes. As with any novel technology, AI has unique benefits yet damaging drawbacks, as AI adepts point out.
Depending on the type and nature of your work, you might want to take AI as your assisting tool in your creative process. AI programs can help with repetitive tasks mainly involving analysis, data collection, information processing, interpreting and representing.
A creative process is often complicated and intricate, requiring many back-and-forth shuffles between different domains and requirements. AI can automate specific tasks, making the creative process more effective and efficient. For instance, AI can scour the internet for images and information to help with brainstorming.
Moreover, AI can be treated as a tool or a catalyst that supports the creative task rather than something we depend on to do it entirely on its own. It is helpful for identifying missing patterns in large data sets during statistical analysis, and it can analyze enormous amounts of information from a wide variety of sources by systematically filtering it, then categorizing and prioritizing it.
AI interprets vast data in graphical representations. It assists humans in identifying connections between seemingly unconnected data, which is quite helpful in drug designing, where AI can identify interactions between the chemistry of different components.
Can AI Replace Human Creativity?
Artificial Intelligence (AI) might revolutionize everything during the forthcoming industrial revolution, serving a diverse range of emerging technologies. Recently, after a turbulent history of successes and failures, ups and downs, these intelligent machines have demonstrated some significant advances in tasks that mainly involve perception, creativity and complex strategic execution.
Some experts argue that the widespread introduction and exposure of AI technologies may cause massive job reductions and greater wealth inequality; however, given some statistical facts and figures, unemployment has decreased, and productivity has increased during the previous industrial and digital revolutions.
Due to its speed and scope, the fourth industrial revolution is an event without precedent in human history. Makridakis predicts that the forthcoming AI-powered transformation will come into full force within the next twenty years, probably impacting society and firms more than the previous industrial and digital revolutions.
Whether the future world will be utopian or dystopian is uncertain, but a tremendous boom is underway in scientific discoveries, areas of application and emerging technologies such as biotechnology, 3D printing, blockchain, virtual and augmented reality, the Internet of Things, smart cities, driverless cars, robotics and AI.
Among all these advanced technologies, AI is expected to affect all industries and companies, enabling extensive organizational interaction and global competition. Schwab proposes that our responsibility is to establish shared values and policies that allow opportunities for all.
Art made using Dall-E
According to machine learning experts, AI will be ubiquitous during the forthcoming industrial revolution, since it enables entities and processes to become innovative. Those corporate and economic sectors willing to adopt AI strategically will enjoy a competitive advantage over those that do not incorporate this technology in a timely and adequate way.
Lagging in adopting intelligent machine learning will be their choice. Education and soft-skills development will play an essential role in AI strategies. In the coming years, deep learning will remain popular in AI research, and AI will be applied incrementally in every research field and industry, producing substantial improvements.
Still, the views on how AI will impact society and firms will remain controversial, similar to the opinions on whether AI will outperform biological intelligence. The fourth industrial revolution promises excellent benefits but entails massive challenges and risks. It seems plausible but remote to achieve the common good globally, requiring global collaboration and shared interests.
The general opinion is that AI cannot replace human creativity; it can only mimic certain aspects of it and cannot replace it as a whole. The reason is that creativity, the most dynamic and productive of our natural capabilities, is not just about gathering or generating new ideas or solutions; it has innumerable factors and phenomena associated with it in ways so complicated that it is nearly impossible for any machine to decipher fully.
AI might be well aware of situational perspectives, but it remains entirely naive of the biochemistry of situational awareness, which is unattainable for it at this time. It can, in one way or another, replicate the collecting, analyzing and processing capacity of the human brain. However, it cannot incorporate the emotional, biological, psychological, chemical and social factors, and the history of experiences, connected to creative output.
AI can never outrank human creative capacities, yet another alarming aspect of this mechanized panorama is still in the picture. Our habit of depending on machines shows no sign of ending; in the past, we have seen humans become more dependent on machines and face the consequences of lethargy and physical stagnation, which in turn affected human health.
If the same scenario continues in the case of artificial intelligence, the more we rely on a machine’s brain, the more devastating the damage to our ingenuity and creative rationale. The idle neurons of the brain will, in effect, continue to eat themselves and erode the overall human persona.
Before that happens, we should be aware of our creative cognition bestowed by nature and never allow any superficial, artificial or automated drivers to drive us.
Future of Life Institute, “Benefits and Risks of Artificial Intelligence,” https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/, 2016, accessed March
K. Schwab, The Fourth Industrial Revolution. New York: Crown Business, 2016
Artificial Intelligence (AI) systems have affected our world in many ways since their rise in the 1950s and have made a profound impact across a wide range of daily applications, making AI one of the fastest-growing technologies globally. Its uses range from automating digital tasks and making predictions to enhancing efficiency and powering smart language assistants.
Whether it be various businesses or architects, almost every other profession is leveraging AI to enhance productivity in their workflows. It is natural to question whether astronomers are utilizing AI to understand the universe better and, if so, what approaches they are taking. In fact, they have embraced the potential of Machine Learning (a subset of AI) since the 1980s, so much so that AI has become a standard part of the astronomer’s toolkit. This article highlights the eminent need for such systems in astronomical data analysis and dives deep into some recent applications where AI is employed.
The launch of the Hubble Space Telescope revolutionized the field of astronomy, yielding stunning imagery and essential data that has fundamentally altered our understanding of the universe. Today, driven by extraordinary advancements in AI, astronomy is experiencing ongoing evolution, uncovering significant insights that may elude human observation. Methods like Machine learning and neural networks have enabled classification, regression, forecasting, and discovery, leading to new knowledge and new insights.
Atacama Large Millimeter/submillimeter Array (ALMA) in Chile Credits: Babak Tafreshi
The necessity of AI automation
A significant aspect of astronomy revolves around managing big data, where the term ‘big’ refers to Petabytes (1000 terabytes) and even Exabytes (1000 petabytes) of data collected from sky surveys like SDSS, Gaia, TESS and more. For instance, Gaia, a survey mission to map the Milky Way galaxy, collects approximately 50 terabytes of data each day. With the advancement of highly capable computer processing powered by AI, astronomers now possess the ability to analyze such massive volumes of data efficiently, significantly reducing the workload of scientists.
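To put that figure in perspective, a quick back-of-the-envelope calculation in Python, using the 50-terabytes-per-day rate quoted above, shows how quickly such a survey climbs into petabyte territory:

```python
# Back-of-the-envelope scale check using the daily rate quoted above.
tb_per_day = 50
days_per_year = 365
tb_per_year = tb_per_day * days_per_year
print(f"{tb_per_year:,} TB per year, i.e. roughly {tb_per_year / 1000:.0f} PB per year")
# -> 18,250 TB per year, i.e. roughly 18 PB per year
```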
According to Brant Robertson, professor of astronomy at UC Santa Cruz, “There are some things we simply cannot do as humans, so we have to find ways to use computers to deal with the huge amount of data that will be coming in over the next few years from large astronomical survey projects.”
Even if all of humanity were to dedicate themselves to analyzing the vast amount of astronomical data, it would take an inconceivably long time to reach meaningful conclusions. With the assistance of AI models, however, simultaneous processing and faster discovery of valuable information are possible, ultimately leading to increased efficiency and much shorter turnaround times. In addition, intelligent machines also improve accuracy and precision, performing repetitive tasks with minimal to no errors.
The Emergence of AI in Astronomy
The utilization of AI techniques has evolved significantly over the years. A paper published in 2020, “Surveying the Reach and Maturity of Machine Learning and AI in Astronomy”, offers valuable insights into the historical progression of AI in this domain. Since the 1980s, principal component analysis (PCA) and decision trees (DT) have been employed for tasks such as morphological classification of galaxies and redshift estimation.
As the field advanced, artificial neural networks (ANNs) emerged as a widely used tool for galaxy classification and detection of gamma-ray bursts (GRBs) during the early stages of their implementation. The application of ANNs has since expanded to encompass diverse areas, including pulsar detection, asteroid composition analysis, and the identification of gravitationally lensed quasars.
Today, astronomers use a plethora of techniques that have resulted in exciting approaches involving the discovery of exoplanets, forecasting solar activity, classification of gravitational wave signals and even reconstruction of an image of a black hole.
I will explore three pivotal applications where the integration of AI plays a crucial role in solving complex problems, in turn shaping our understanding of the cosmos:
AI-Driven Morphology Classification of Galaxies
The classification of galaxies, whether they are elliptical, spiral, or irregular, enables us to gain insights into their overall structure and shape. This understanding is instrumental in estimating their composition and evolutionary trajectory, making it a fundamental objective in modern cosmology.
The advent of extensive synoptic sky surveys has led to an overwhelming volume of data that surpasses human capacity for scrutiny based on morphology alone. Since the 2000s, machine learning (ML) has appeared as the predominant solution to tackle this challenge and has effectively taken over the task of classifying galaxies. The classification of large astronomical databases of galaxies empowers astronomers to test theories and draw conclusions that reveal the underlying physical processes driving star formation and galaxy evolution.
The Deep Learning era brought forth Artificial Neural Networks (ANNs) that have accelerated the efficiency of classification and regression tasks by many folds. ANNs are computational models inspired by the human brain’s neural networks, capable of learning patterns and making predictions from large datasets. The input layer receives galaxy data, which is processed through hidden layers that perform complex computations. The output layer then generates classifications based on learned patterns. Each galaxy in the dataset is represented by a set of input features, such as photometric measurements or morphological properties derived from images.
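As a toy illustration of that input-hidden-output structure (not the actual model used by any survey team), the following Python sketch trains a small multi-layer network with scikit-learn on synthetic, made-up galaxy features and three invented morphology classes:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for per-galaxy input features (e.g. colours, concentration,
# ellipticity) and labels 0 = elliptical, 1 = spiral, 2 = irregular.
rng = np.random.default_rng(1)
features = rng.normal(size=(3000, 8))
labels = rng.integers(0, 3, size=3000)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=1)

# Two hidden layers between the input features and the three-class output,
# mirroring the input -> hidden -> output structure described above.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=1)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```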
Images of the Subaru survey being classified by the model through prediction probabilities of each class. Credits: Tadaki et al. (2020)
While the vast volume of data can introduce model biasing, citizen scientists worldwide have collaborated through initiatives like Galaxy Zoo and Galaxy Cruise, playing a crucial role in validating the model results. This collective effort has effectively improved the accuracy of neural networks in classifying galaxies. Under a National Astronomical Observatory of Japan (NAOJ) project led by Dr Ken-ichi Tadaki, ANNs achieved an impressive accuracy level of 97.5%, identifying spirals in about 80,000 galaxies and confirming the potential of AI systems for determining the morphology of galaxies.
Reconstructing Black Hole images using Machine Learning
If you ask me what this century’s most remarkable scientific achievement is thus far, I would say that the black hole image revealed in 2019 would undoubtedly claim the top spot on the list. We got to see what the real supermassive black hole in Messier 87 would look like if we were there to see it.
Behind all the awe lies the immense dedication of the Event Horizon Telescope team, who invested two years in observing, processing, and eventually unveiling the black hole image to the public. Recently, the same data underwent a significant enhancement with Machine Learning, where we got a crisper, more detailed view of the light around the M-87 black hole. But then again, what was the need to use ML in the first place if we already got that incredible image back in 2019?
The Event Horizon Telescope is a network of eight radio telescopes in different parts of the globe, linked into a single array so that, in effect, we get an Earth-sized telescope. However, data gaps arise due to the irregular spacing between them, just like missing pieces in a jigsaw puzzle.
At first, scientists reconstructed the missing data without leaning on computer simulations or theoretical predictions. The 2019 image was produced with model independence, meaning the team did not assume they knew what the final image should look like or what shape it would take.
Left: Original 2019 photo of the black hole in the galaxy M87. Right: New image generated by the PRIMO algorithm using the same data set. Credits: L. Medeiros (Institute for Advanced Study)
Without any presumed predictions, the team still managed to get a clear shape of a ring of light as Einstein’s theory of general relativity predicted. The appearance of a ring is attributed to the hot material orbiting the black hole in a large, flattened disc that becomes distorted and bends due to the black hole’s gravitational pull. As a result, this ring shape is observable from almost any viewing angle.
Now that we are fairly certain what the image of a black hole should look like, scientists have developed a new technique called PRIMO (Principal-component Interferometric Modeling), which uses sparse coding to fill gaps in the input data. The algorithm builds on the original EHT data and fills in the missing pieces more precisely, hence achieving higher resolution.
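PRIMO itself is considerably more sophisticated, but the general idea of component-based reconstruction can be illustrated with a toy numpy sketch: learn principal components from a set of simulated images, fit their coefficients using only the observed pixels, and use the fitted model to fill in the gaps. Everything below (image sizes, the number of components, the random data) is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training set": 500 simulated 16x16 images, flattened (stand-ins for physics simulations).
train = rng.normal(size=(500, 256))
mean = train.mean(axis=0)
# Principal components of the simulated images (keep the top 20).
_, _, components = np.linalg.svd(train - mean, full_matrices=False)
basis = components[:20]                      # shape (20, 256)

# An "observed" image with roughly 40% of its pixels missing (the data gaps).
truth = train[0]
observed_mask = rng.random(256) > 0.4

# Fit the component coefficients using only the observed pixels...
A = basis[:, observed_mask].T                # (n_observed, 20)
b = (truth - mean)[observed_mask]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# ...then reconstruct the full image, filling in the gaps from the learned components.
reconstruction = mean + coeffs @ basis
```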
The newly reconstructed image is consistent with the theoretical expectations and shows a narrower ring with a more prominent symmetry. The greater the detail in an image, the more accurately we can understand the properties, such as the ring’s mass, diameter, and thickness.
Project lead author Lia Medeiros of the Institute for Advanced Study highlighted in her paper, “Since we cannot study black holes up-close, the detail of an image plays a critical role in our ability to understand its behaviour. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity.”
Techniques like PRIMO can also have applications beyond black holes. As Medeiros stated: “We are using physics to fill in regions of missing data in a way that has never been done before by using machine learning. This could have important implications for interferometry, which plays a role in fields from exo-planets to medicine.”
You can find more detail about the mentioned method in their paper published in The Astrophysical Journal letters.
The study of extra-solar planets is one of the most fascinating and attractive fields of research in astronomy. As humans, our innate curiosity drives us to seek answers about the existence of life elsewhere in the universe. The exploration begins with the question of detecting water in exoplanets and other terrestrial bodies that might indicate the formation of life.
Astronomers have developed many techniques, most prominently spectroscopy, in which the signatures of molecules in a celestial body can be detected. However, the time-intensive nature of spectroscopy creates a hurdle for short observations. There is therefore a need for a simpler yet much more efficient method in which the initial characterization of potential targets is performed separately, before conducting detailed spectroscopic analysis at a later stage. This specific problem is being addressed by utilizing AI.
Artist’s view of an exoplanet where liquid water might exist. Image Credit: ESO/M. Kornmesser
In a recent study, astrophysicists Dang Pham and Lisa Kaltenegger have used XGBoost, a gradient-boosting technique to characterize the existence of water in Earth-like terrestrial exoplanets in three forms; seawater, water clouds and snow. The algorithm is trained using the data of reflected broadband photometry, in which the intensity flux in specific wavelengths is measured from the reflected light of an exoplanet. The model shows promising results and achieves >90% accuracy for snow and cloud detection and up to 70% accuracy for liquid water.
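The authors’ own pipeline is described in their paper; purely as an illustrative sketch of how a gradient-boosted classifier of this kind is set up, the following Python snippet trains XGBoost on synthetic broadband fluxes with an invented labelling rule:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: reflected flux in a handful of broadband filters for
# simulated terrestrial planets, labelled 1 if the toy "model planet" has water
# clouds (the real study trained on physically simulated spectra instead).
rng = np.random.default_rng(7)
fluxes = rng.random((5000, 6))
has_clouds = (fluxes[:, 2] + 0.5 * fluxes[:, 4] > 0.9).astype(int)  # toy rule

X_train, X_test, y_train, y_test = train_test_split(
    fluxes, has_clouds, test_size=0.2, random_state=7)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```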
In this way, a larger number of planets within the habitable zone having water signatures can be screened so that large projects like JWST can pinpoint and analyze extensively only the most favourable targets. According to Dr Pham: “By ‘following the water’, astronomers will be able to dedicate more of the observatory’s valuable survey time to exoplanets that are more likely to provide significant returns.”
Their findings were recently published in the Monthly Notices of the Royal Astronomical Society.
(Ref: Pham, D., & Kaltenegger, L. (2022). Follow the water: finding water, snow, and clouds on terrestrial exoplanets with photometry and machine learning. Monthly Notices of the Royal Astronomical Society: Letters, 513(1), L72-L77)
Conclusion
Through ongoing research and advancement, AI continues to shape the future of astronomical exploration, enabling scientists to delve deeper into the vast expanse of the universe. Deep learning models like convolutional neural networks are reprocessing observational data in innovative ways, enabling discoveries even in data collected from older surveys.
We can only imagine what groundbreaking discoveries AI will bring when coupled with the powerful potential of the James Webb Space Telescope and upcoming projects like the Nancy Grace Roman Space Telescope. These visionary projects open doors to a realm of revolutionary discoveries, while the ever-expanding volume of astronomical data can now be harnessed to its fullest potential, thanks to the innovative advancements brought forth by the age of AI.
References:
Djorgovski, S. G., Mahabal, A. A., Graham, M. J., Polsterer, K., & Krone-Martins, A. (2022). Applications of AI in Astronomy. arXiv preprint arXiv:2212.01493.
Fluke, C. J., & Jacobs, C. (2020). Surveying the reach and maturity of machine learning and artificial intelligence in astronomy. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(2), e1349.
Pham, D., & Kaltenegger, L. (2022). Follow the water: finding water, snow, and clouds on terrestrial exoplanets with photometry and machine learning. Monthly Notices of the Royal Astronomical Society: Letters, 513(1), L72-L77.
Medeiros, L., Psaltis, D., Lauer, T. R., & Özel, F. (2023). The Image of the M87 Black Hole Reconstructed with PRIMO. The Astrophysical Journal Letters.
NAOJ. (2020, August 11). Classifying Galaxies with Artificial Intelligence. Retrieved from https://www.nao.ac.jp/en/news/science/2020/20200811-subaru.html
Artificial intelligence (AI) is a fast-growing technology capable of transforming every aspect of life. It has long been applied in machine learning, robotics, medical procedures, and automobiles. However, the wider world has come to recognize the power of AI since conversational tools such as ChatGPT became widely accessible.
The purpose of this advancement is human welfare and progress, and one sensitive area greatly influenced by it is education. Education planners and teachers are asking how this technology can lead to better learning.
In the past, students and teachers used to spend a lot of time researching and analyzing information for assignments or articles. It was tedious to go through multiple sources, study the literature, and perform critical analysis to create something new.
However, with the advent of artificial intelligence, this process has become much faster, more efficient, and more reliable. AI has emerged as a valuable tool that saves time and surfaces essential information related to a topic, along with supporting references.
AI has revolutionized how students and teachers approach academic research, helping them accomplish more in less time. The technology can quickly and accurately analyze large volumes of data, and identify patterns, trends, and relationships, thus assisting scholars in discovering valuable insights that can be used to support their arguments or ideas. Additionally, AI-powered applications like plagiarism checkers can easily detect copied content, making it easier for teachers to evaluate the originality of student assignments.
Education could benefit greatly from AI by helping teachers customize learning, adjusting it to the needs of each student. With AI-enhanced learning, students can learn at their own pace, receive personalized attention, and better understand what they are learning. AI can be used in many ways in education, such as intelligent tutoring systems and natural language processing; these systems help improve STEM courses by adapting lessons to each student’s individual needs.
What is Cognitive Technology?
Students can receive intellectual direction in planning, problem-solving, and decision-making through cognitive technology. Cognitive computing can make it easier to develop technology-enhanced educational material. Big data analysis using machine learning can forecast student achievement, and machine learning algorithms are already used to identify students who are most likely to fail and to suggest interventions.
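As a simple illustration of this idea, the sketch below trains a classifier on a handful of made-up student records (attendance, assignment scores, learning-platform activity) to flag students at risk of failing; the feature names and data are hypothetical.

```python
# Minimal sketch of flagging at-risk students with machine learning.
# All records below are fabricated examples.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical student records: attendance, assignment scores, LMS activity
data = pd.DataFrame({
    "attendance_rate": [0.95, 0.60, 0.88, 0.45, 0.70, 0.99, 0.55, 0.80],
    "avg_assignment":  [85, 52, 78, 40, 65, 92, 48, 74],
    "lms_logins_week": [12, 3, 9, 2, 5, 15, 4, 8],
    "failed":          [0, 1, 0, 1, 0, 0, 1, 0],   # 1 = student failed the course
})

X, y = data.drop(columns="failed"), data["failed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```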
Adaptive learning management systems (ALMS):
Another way education is improving is with adaptive learning management systems. These systems match the curriculum to students’ learning styles, preferences, and backgrounds to keep them motivated and on track. They can also suggest careers and connect students with job opportunities.
Natural Language Processing (NLP):
Natural Language Processing can be useful in education. It can help chatbots and algorithms provide instant communication and personalized responses to students, which can increase their focus and interest in digital learning. NLP can also be used to analyze the tone of essays written by students.
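For a sense of how little code such tone analysis requires, here is a minimal sketch using an off-the-shelf sentiment model from the Hugging Face transformers library; the essays are invented examples, and a production system would use a model tuned for educational feedback.

```python
# Minimal sketch: analyzing the tone of student essays with an off-the-shelf
# sentiment model. The essays are made-up examples.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")   # downloads a default English model

essays = [
    "I really enjoyed this project and learned a lot about renewable energy.",
    "The assignment was confusing and I struggled to find reliable sources.",
]

for essay, result in zip(essays, sentiment(essays)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {essay[:50]}...")
```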
What is Sustainable Development Goal No. 4, and how can AI help to achieve it?
Artificial intelligence has the potential to revolutionize how people teach and learn, accelerating progress toward SDG 4 (Sustainable Development Goal 4 is about quality education and is among the 17 Sustainable Development Goals established by the United Nations in September 2015). The education goals of the 2030 Framework can be accomplished with the help of artificial intelligence technologies. UNESCO (the United Nations Educational, Scientific and Cultural Organization) is devoted to aiding its member states in doing so, while guaranteeing that AI’s use within learning environments is governed by the fundamental values of equality and diversity.
AI and Big Data
As science brings new ideas and information, students and researchers find it difficult to sort through the massive amount of data and information available. AI can help by analyzing big data and making this process more efficient.
AI can help with analyzing large amounts of data in education. It can predict outcomes like students dropping out or not doing well and take steps to prevent that. AI can create personalized learning plans based on a student’s preferences, strengths, and weaknesses. It can also assess a student’s knowledge and skills and design lessons accordingly.
AI can also help develop better teaching methods and manage data from different sources. AI can help schools and colleges improve learning outcomes and resource utilization by analyzing data.
Can AI make students less creative?
When used appropriately, AI can enhance students’ creativity by providing them with new learning and problem-solving tools. AI technologies, such as natural language processing and machine learning, can help students explore and analyze data in sophisticated ways, leading to new insights and innovative solutions to complex problems. AI-powered virtual assistants and chatbots can also offer personalized feedback and guidance, motivating students to explore new ideas and approaches.
However, too much reliance on AI can potentially hinder creativity by limiting students’ ability to think critically and independently. If students solely depend on AI to provide them with answers or solutions, they may lose the drive to experiment and explore different approaches. Therefore, striking a balance between AI tools and traditional skills, like critical thinking, problem-solving, and creativity, is essential to nurture well-rounded and innovative individuals.
Has artificial intelligence (AI) taken over teaching?
AI does not provide a replacement for teachers. Teachers do more than deliver content; they shape it according to each student’s intellectual ability, and they will remain the central hub of the educational system. AI can, however, assist teachers in creating data-driven plans to present to learners.
The most crucial point is that AI cannot take the teacher’s place, because the teacher provides emotional support to students. AI cannot offer creativity or passion, nor can it act as a guardian or guide the way a teacher does.
AI can also serve as the teacher’s assistant by helping with exam grading. In some parts of the world, AI grading systems are already in use; for example, China has incorporated paper-grading artificial intelligence into its classrooms, according to the South China Morning Post.
AI has transformed traditional teaching methods into a more flexible and creative style. This allows teachers to better understand their students’ strengths and weaknesses in certain subjects, helping them provide targeted support where needed.
Implications of AI in Education
AI can assess a student’s level, give constant feedback, and strengthen their learning abilities. It also helps students self-monitor by providing personalized feedback.
AI further helps students access information for free or at low cost. It lets them learn anywhere, without needing a classroom at a fixed time, which also saves money.
Artificial Intelligence and Ethical Concerns
A major concern with AI is student privacy. The future of AI in education depends on protecting students’ personal information, behaviour analyses, and feedback reports. There must be laws governing control over the personal information of students and teachers, and no third party should have any access to it.
The Concluding Note
In conclusion, artificial intelligence boosts productivity and is revolutionising education by providing an intellectual framework, feedback, and equal access to quality, free education. Teachers are not replaced by AI, nor does it pose a danger to the intellectual minds in their fields; rather, it works alongside them as an augmented system in education. Though AI has many positive aspects, there remains a threat of students’ and teachers’ personal information being stolen. In general, AI technology has the ability to improve education and help students realize their full potential.
References
Altaf, M., & Javed, F. (2018). The Impact of Artificial Intelligence on Education. Journal of Information Systems and Technology Management, 15(3), e201818006.
Arslan-Arı, İ., & Karaaslan, M. I. (2019). The Role of Artificial Intelligence in Education: A Review Study. Educational Sciences: Theory and Practice, 19(4), 1463-1490.
Hwang, G., & Wu, P. H. (2018). Applications, Impacts and Trends of Mobile Technologies in Augmented Reality-Based Learning: A Review of the Literature. Journal of Educational Technology & Society, 21(2), 203-222.
Shinde, R., & Bharkad, D. (2019). The Future of Learning with Artificial Intelligence. International Journal of Engineering & Technology, 11(4), 241-247.
Wierstra, R., Schaul, T., Peters, J., & Schmidhuber, J. (2014). Natural Evolution Strategies. Journal of Machine Learning Research, 15, 949-980.
UNESCO. (2019). Beijing Consensus on Artificial Intelligence and Education.
The 21st century has opened avenues for humankind that were once deemed impossible. Technological evolution offers multipurpose benefits and enables humanity to delve into the heart of scientific knowledge. Astonishing as it may seem, scientific advancement is proof of human intelligence. Ideas have become practical tools, and knowledge has been used to create technological miracles, all thanks to artificial intelligence.
“Artificial Intelligence is whatever hasn’t been done yet” -Larry Tesler
DALL-E, an exceptional scientific feat, is a visual storytelling tool. It uses artificial intelligence to create graphic representations, translating textual descriptions into images. The name “DALL-E” is rooted in two words: Salvador Dalí, the famous artist, and WALL-E, the Pixar film. It is an excellent coalition of heterogeneous ideas, concepts, and thoughts brought together.
Apart from providing its users with an immersive visual experience, DALL-E is equipped to interpret the intricate relationships between objects, making the whole experience transformative. It is best at transforming ideas into realistic graphics, and it also excels at modifying existing images and adding new features to them. Overall, it enhances the visual experience for users.
Developed by OpenAI, DALL-E aims to provide users with a user-friendly, safe, and experimental output. The first model was made public in 2021; however, it had technical issues that needed to be fixed. For example, its images would occasionally come out blurred or cluttered. The company noticed the issue and modified the initial version of the software to release DALL-E 2 in April 2022.
The latest version can produce more realistic images and use different styles. DALL-E amalgamates three features: machine learning, natural language processing, and computer vision.
DALL·E 2 image generation process
Operational Procedure: A Glance
DALL-E works through four steps to turn the text input into an actual image:
Pre-Processing is the first step, wherein the user adds the text describing the image they want to produce. The system converts the text into vectors. Using the language model GPT-3, the software attempts to understand what the user wants.
Encoding is the next step, where the textual prompt, turned into vectors, is used to create the image the user requested.
Decoding or refining follows, where the system refines the generated image until it looks realistic. After multiple refinement cycles, the system evaluates whether any further changes are required.
Output is the final step, where the final image is displayed on the user interface.
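In practice, developers reach DALL-E through OpenAI’s API rather than these internal steps. The sketch below shows a minimal request using the openai Python package; it assumes an API key is configured, and exact method names can differ between library versions.

```python
# Minimal sketch: generating an image from a text prompt via the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; method names may differ between library versions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",                                       # DALL-E 2 model
    prompt="a three-pawed cat sitting on a flying table",   # text description
    n=1,                                                    # number of images
    size="512x512",                                         # output resolution
)

print(response.data[0].url)  # URL of the generated image
```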
As cutting-edge software in the AI ecosystem, it has numerous uses. But beyond its benefits, what makes DALL-E stand out? Let’s have a look.
Text-to-Image Synthesis
DALL-E has a unique capability to turn your textual descriptions into graphical representations. It lets you think outside the box and explore every possible reality while visually representing how your ideas and thoughts fit together.
Inventive outputs
Unlike other image-generating tools and software, DALL-E works beyond the expected. It creates images from surreal and imaginative ideas and produces results that once seemed impossible, for example, a three-pawed cat or a flying table. Almost nothing is beyond this tool’s reach.
Extensive dataset
The software has been trained on a vast amount of data and tested many times. This enables it to understand the relationship between visual objects and the textual information provided, and to produce high-quality images.
Meticulously portrayed images
DALL-E has the remarkable potential to generate high-quality images that appear authentic. The position, colours, appearance, and orientation of objects are optimized according to user input, allowing the image to be customized to the user’s preferences.
Diverse specialized applications
DALL-E has a wide range of applications, from design to product development. The visual possibilities it offers are endless, a few of which are mentioned below:
Education
Entertainment
Product design
Marketing
Art
Image generated using DALL-E
Perks of DALL-E
This tool has numerous advantages that make it accessible and appealing. Some of the most prominent are explained below:
Customization
You can customize the generated image to your preference. Whatever you can imagine, any idea that comes to mind, you simply type a few phrases into the text box and get striking results.
Accessibility
DALL-E is an easy-to-use software requiring no specialized knowledge or computer language. Almost any individual with basic writing knowledge can easily use the software.
Iteration
Users can make multiple iterations of existing images, editing them and adding new features. Images can be iterated on quickly.
Promptness
It is a quick image-generating tool. With just a few clicks, your vision appears before you within seconds.
Despite the immense benefits DALL-E has to offer, there are still concerns about its use in the real world. Some foundational aspects of the software could be improved. Although it can process vast amounts of data, there are still prompts the software struggles to translate into images.
Secondly, there is some doubt regarding the text input. The prompt must be clear and well explained for DALL-E to produce the exact image; if it is not well defined, the resulting image may not be accurate. Another point of contention is the legitimacy of the system: it can generate images of any kind, even of scenarios that do not exist in the real world.
Whether the system is grounded enough in realism for users to trust it is another issue. Moreover, AI tends to interpret words literally; words or phrases with similar meanings can confuse the system, producing images contrary to your idea.
Undoubtedly, DALL-E is a revolutionary breakthrough in artificial intelligence, allowing its users to go beyond their imagination and taking them on a journey of possibilities. However, its practical use and its potential to disconnect users from the real world remain a considerable concern.
“DALL-E takes us one step closer to realizing the dream of machines that can truly understand and create visual content” -Fei-Fei Li
There are no barriers when it comes to the applications of artificial intelligence (AI) in the modern world. With the introduction of AI as a first-hand utility tool, such as ChatGPT, the world has entered a new era. People are now convinced that enhancing efficiency is the prime objective.
Indeed, we are witnessing the remarkable impact of AI in various fields, from automated text solutions to cybersecurity, transportation, manufacturing, retail, finance, education, and particularly the field of healthcare.
Healthcare and AI
Healthcare has always been extremely important, directly linked to human quality of life and welfare. AI has transformed modern healthcare diagnosis, treatment, and patient care through its extensive applications in medical tests and procedures such as MRI, CT scans, X-rays, diagnostic algorithms, drug delivery, personalized medicine, robot-assisted surgery, and more.
One of the most promising and interesting applications of AI is in the field of prosthetics, providing smart and accessible solutions. Gone are the days when artificial limbs had to be static, uncomfortable, and detached from the body’s sensory feedback.
What is Prosthetics?
Prosthetics are artificial devices designed to replace a missing or damaged body part such as a hand, foot, limb, or facial feature. According to the World Health Organization (WHO), prostheses are artificial devices that replace missing body parts, while orthoses are supportive braces and splints that help damaged parts.
A good prosthetic should deliver both function and aesthetic appeal, making the amputee feel independent, emotionally comfortable, and complete.
Traditional Prosthetics and Challenges
The first known prosthetic was a toe worn by an Egyptian woman around 3,000 years ago, needed to wear the traditional Egyptian sandal. Later, during the Dark Ages, prosthetic limbs came into existence, although they were mere rigid components with no functional value; examples include wooden or metal hands and the peg legs used by sea pirates.
Traditional prosthetics were neither aesthetically pleasing nor offered mobility or functional value. These early devices were typically constructed from heavy materials, resulting in bulky and uncomfortable designs. This not only impacted the physical comfort of the wearer but also took a toll on their emotional well-being. Individuals relying on traditional prosthetics often faced challenges in carrying out daily tasks independently, leading them to seek assistance despite using artificial devices. These challenges prompted the development of modern-day prosthetics.
Modern Prosthetics and the Role of AI
The evolution of prosthetics has been nothing short of extraordinary, traversing centuries of innovation and advancements. From rudimentary wooden peg legs to intricately designed robotic limbs, the field of prosthetics has undergone a remarkable transformation.
The French war surgeon Ambroise Paré developed the first functional prosthetics and switched to lighter materials, using principles of human physiology to mimic natural body movements. His designs are still in use today, with added improvements.
The year 1993 marked the introduction of intelligent prosthetics, opening a new direction for smart, sense-controlled devices. Adaptive prosthetics emerged in 1998, working on microprocessor-, pneumatic-, and hydraulic-based mechanisms. In 2006, Össur (an Iceland-based company) developed a fully controlled, AI-based power knee. Later, the same company developed the first bionic leg, which connects mind and machine.
The first fully integrated artificial limb was developed in 2015 by Blatchford. It used a total of 7 sensors and 4 CPUs to connect with the body’s sensations and control. This AI-based system gives a more natural outcome in routine tasks like sitting, walking, and standing, and gives independence to the user.
Today, AI is a necessary feature of modern prosthetics.
By harnessing the power of AI, we can create prosthetics that are more functional, intuitive, and personalized than ever before.
How AI Makes Prosthetics Smart
Artificial intelligence is used in modern prosthetics on a principle similar to the human body’s natural coordination system.
Just as the human sensory organs (eyes, nose, etc.) coordinate with effectors (hands, limbs, etc.) through the brain, robotic prosthetics work the same way. These devices use cameras or radiation-based sensors as their sensory organs and electrically connected motor units as effectors, while a central unit running complex algorithms acts as the brain.
This central unit (like the brain) is equipped to receive and interpret body sensations. It is the most important component, and it is where AI is applied through two basic mechanisms:
‘Symbolic Learning’ and ‘Machine Learning’ as Major Mechanisms of AI Prosthetics
SL (Symbolic Learning):
It helps to process images, symbols and the environment through a camera lens (computer vision).
ML (Machine Learning):
It helps to process the data input through sensors, storing some of it as memory to adapt and adjust to the user’s needs over time. It uses classifier and prediction algorithms to recognize speech and language, known as a ‘statistical learning mechanism’.
In addition, ML achieves a sensory connection to the amputee’s body through a ‘deep learning mechanism’ based on CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks).
Humans | AI Prosthetics
Vision, speaking and listening | Symbolic vision and statistical learning
Learning, recognition and memory | Artificial neural network and processor
Object and environment recognition | Machine learning through CNN
Table: Functional Resemblance between Human Sensory Response and AI Prosthetics
Exciting Breakthroughs and Success Stories of AI-Powered Prosthetics
Artificial Intelligence (AI) is used in modern myoelectric prosthetics, where electrodes use muscle impulses to generate and amplify signals that translate into controlled movements. In addition, ‘Peripheral Nerve Interface’ (PNI) and ‘Brain Machine Interface’ (BMI) are employed to understand brain signals and connect smart prosthetics to human voluntary control. Some major breakthroughs in AI-driven prosthetics are:
AI-Based Myoelectric Hands and Bionic Limbs
Balanced voluntary control, jumping obstacles and climbing stairs are some of the most challenging activities for amputees who depend on prosthetics.
Luckily, myoelectric and bionic prosthetics, advanced interventions in this area, have solved this problem by enabling comfortable voluntary actions. They make use of electroencephalography (EEG) and electromyography (EMG) signals, picked up through implanted electrodes, to take nervous-system commands directly from the user. Machine learning and memory help make repeated movements smoother and more seamless.
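A highly simplified sketch of the decoding step is shown below: short windows of (here, randomly generated) EMG-like signals are reduced to standard time-domain features and classified into intended movements. Real myoelectric controllers use far richer signals, features, and models.

```python
# Minimal sketch: classifying windows of EMG-like activity into intended
# movements. Signal windows and labels are random placeholders, not real EMG.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_windows, window_len = 600, 200                 # 200-sample windows of one channel
raw = rng.normal(size=(n_windows, window_len))
labels = rng.integers(0, 3, size=n_windows)      # 0 = rest, 1 = grip, 2 = open hand

# Simple time-domain features often used for EMG: mean absolute value,
# root-mean-square amplitude, and zero-crossing count per window
features = np.column_stack([
    np.mean(np.abs(raw), axis=1),
    np.sqrt(np.mean(raw ** 2, axis=1)),
    np.sum(np.diff(np.sign(raw), axis=1) != 0, axis=1),
])

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)

print("Decoded intent for first test window:", clf.predict(X_test[:1])[0])
```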
Johnny Matheny’s Success with Myoelectric Prosthetic Limb
Johnny Matheny had his arm amputated due to cancer in 2008. He did not give up; instead, he collaborated with the APL research team to have the best possible prosthetic designed. This journey led to the development of the advanced MPL (Modular Prosthetic Limb).
Johnny Matheny with his prosthetic arm. Credit: Inspiremore
This fully self-controlled limb takes signals from the brain through neural networks and device electrodes. Interestingly, this advanced AI-based prosthetic enabled Johnny to play ‘Amazing Grace’ on a piano.
AI and 3D Printing
The amazing technology of 3D printing combined with AI holds great promise for user-centred manufacturing of prosthetics. These wearable parts can be made with greater precision, in less time and at lower cost, while AI helps match the wearer’s individual body shape, weight, and exact sizing.
In addition to enhancing user comfort, 3D printing promises to improve the quality of life worldwide, especially in the poorest areas where affordability is one big challenge.
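As a toy example of how a learned model could feed user-specific dimensions into a 3D-printing workflow, the sketch below fits a simple regression from body measurements to a socket dimension; all measurements, targets, and the linear model itself are hypothetical stand-ins for a real fitting process.

```python
# Minimal sketch: predicting a custom socket dimension from simple body
# measurements before 3D printing. All values are fabricated placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [limb circumference (cm), limb length (cm), body weight (kg)]
measurements = np.array([
    [30.0, 22.0, 70.0],
    [34.0, 25.0, 85.0],
    [28.0, 20.0, 60.0],
    [36.0, 27.0, 95.0],
])
socket_diameter_mm = np.array([96.0, 108.0, 90.0, 114.0])  # target print dimension

model = LinearRegression().fit(measurements, socket_diameter_mm)

new_patient = np.array([[32.0, 23.5, 78.0]])
print("Suggested socket diameter (mm):", round(model.predict(new_patient)[0], 1))
```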
3D Printed Prosthetic Paw and the Success Story of Millie, a Greyhound
Millie, a greyhound puppy, was adopted by an Australian couple. Sadly, she was missing a front paw, which greatly affected her daily life and well-being. However, the couple’s determination to give Millie a comfortable and fulfilling life led them to choose an AI-based, 3D-printed paw as the last, and best, resort.
The Inspiring Story of Adrianna Haslet, a professional dancer
A ballroom dancer from Boston, Adrianna survived the Boston Marathon bombing and later a car accident, losing her lower leg and badly injuring her arm. Despite the emotional baggage these incidents brought, she stayed strong and hopeful. Thanks to AI, Haslet has been able to resume her physical fitness and running after years, using a prosthetic leg that adapts to her body’s needs and movements.
Artificial Skin, the E-dermis
Another intelligent feature added to modern prosthetics is an outer skin-like layer that adds a sense of real touch. In addition to voluntary control, modern AI prosthetics can provide sensations of touch, pressure, and pain, just as natural skin receptors do. This artificial skin, called an ‘e-dermis’, is made of rubber and fabric and connects through a Peripheral Nerve Interface (PNI) to generate sensations of touch, pain, and temperature.
e-dermis, made of rubber and fabric material
AI-Based Exoskeletons
Exoskeletons are outerwear prosthetics like external coverings or suits. When equipped with AI modulators and networks, these work like a charm. Bionic limbs also use a similar approach where AI makes it possible to keep user intent and control integrated through robotic processors.
Angel Giuffria and her Bionic Limb
An actress and model, Angel Giuffria was born without a left hand. Using outdated, minimally functional, bulky prosthetics in her childhood made her look for better options. Today, she uses an advanced bionic left limb with a myoelectric hand, enabling her to enjoy her favourite activities of biking, archery, yoga, and workouts.
Angel uses an advanced Bionic left limb with a myoelectric hand. Credit: Lajos Kalmár
Current Challenges and The Future of AI-Driven Prosthetics
There is no doubt that AI is pioneering the future of prosthetics and healthcare and improving quality of life. However, mishandling any technology may cause trouble, and AI is no exception. The most common challenges with this technology are:
Lack of human oversight. Depending entirely on computerized, digitalized gadgets is risky if the automated system malfunctions and there is no way for a human to intervene and correct the error in time. Keeping a delicate balance between fully automated AI systems and the human touch leaves room for correcting malfunctions.
Information theft or biohacking and ethical concerns of data sharing and data leakage can make the devices malfunction or freeze and invade users’ privacy. This may be done intentionally by competitor companies or unintentionally by system errors or weak security locks.
Accessibility of technology. The 3D printing of customizable prosthetics seems to be a great opportunity for the world, especially in poverty-stricken areas, but this technology is not yet freely accessible.
Having sufficiently trained personnel is another challenge. Dealing with the latest smart-prosthetic technology is not a task for everyone; there is a need for staff who understand the technical aspects, troubleshooting, and other sensitive areas of the smart prosthetics they handle.
Overpricing and limited availability of AI prosthetic components, especially the chips that serve as the main processing unit.
Aesthetically pleasing design, to maintain the user’s self-confidence, emotional well-being, and independence in society. This can be achieved by making the devices look more natural, with texture and colour similar to the body part they replace.
Lack of sufficient coordination among engineers, technicians and researchers results in prosthetics lacking in one or more areas.
Improving the efficiency of AI prosthetics depends greatly on collecting data from diverse populations who can test the products and give feedback. This helps account for individual variability and supports better adaptation in future devices. However, there is still a gap in collecting sufficiently high-quality and diverse data.
Resolving these challenges should be our primary target as we step toward a brighter and better future for AI prosthetics.
AI Prosthetics – The Journey Ahead
The past decade has marked a remarkable journey in the field of AI-based healthcare products, especially the innovation of smart prosthetics. These products have made considerable improvements in human quality of life, both physically and emotionally. Combining the principles of ML, SL, EEG and 3D printing, we have achieved AI-integrated prosthetics which deliver more comfort, sensory value and enhanced functionality.
Current research in the area of smart prosthetics focuses on integrating AR (augmented reality) and improved neural coordination networks, making prosthetics work and look just as natural as the actual body part. In addition, advancements in 3D printing technology are being made to make it the ultimate cost-effective and user-specific solution for the world. This can solve the problem of overpricing and accessibility.
Another area to work on is distant/remote monitoring and maintenance of AI prosthetics. This may help detect any error or abnormal behaviour in time through integrated sensors, which can quickly channel this information to the right department. This may also help avoid any risk of biohacking.
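One plausible building block for such remote monitoring is anomaly detection over the prosthesis’s sensor stream. The sketch below trains an isolation forest on simulated “normal” readings and flags a suspicious spike; the sensor channels and values are invented for illustration.

```python
# Minimal sketch: flagging abnormal sensor readings from a prosthesis so they
# can be reported for maintenance. Sensor values are simulated placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Simulated normal operation: joint temperature (°C), motor current (A), load (N)
normal = rng.normal(loc=[35.0, 1.2, 300.0], scale=[1.0, 0.1, 20.0], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A new batch of readings, including one suspicious spike in motor current
new_readings = np.array([
    [35.2, 1.25, 305.0],
    [34.8, 3.90, 310.0],   # abnormal current draw
])
flags = detector.predict(new_readings)       # -1 = anomaly, 1 = normal
print("Anomaly flags:", flags)
```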
The invention of electronic skin ‘e-dermis’ has helped in adding a natural touch to artificial prosthetics. However, the present prototype, made of rubber sheets and electrodes, looks different from the natural skin appearance. Better collaboration among biologists, physiologists, engineers and concerned departments may help design optimized solutions which are also aesthetically pleasing and look more natural. The selection of the right quality materials for prosthetic manufacturing plays a vital role in this matter.
Better exchange of information is being made possible through integration with the Internet of Things (IoT). This helps make data exchange and remote control more coordinated and seamless.
While the demand for advanced prosthetic solutions continues to grow, entrepreneurs and innovators have a unique opportunity to capitalize on the potential of AI in this field.
A Concluding Note
As we journey forward, let’s stay dedicated to advancing prosthetics through continuous development, innovation, and research. Together, we can empower individuals with limb loss, helping them regain independence and improve their overall well-being.
In the future, AI-powered prosthetics will empower individuals with limb loss to redefine possibilities and embrace a life of newfound independence.
References:
World Health Organization. (2017). WHO standards for prosthetics and orthotics.
Nayak, S., & Das, R. K. (2020). Application of artificial intelligence (AI) in prosthetic and orthotic rehabilitation. In Service Robotics. IntechOpen.
Ghazaei, G., Alameer, A., Degenaar, P., Morgan, G., & Nazarpour, K. (2017). Deep learning-based artificial vision for grasp classification in myoelectric hands. Journal of neural engineering, 14(3), 036025.
Smith, M. (2021, October 6). Breakthroughs in Prosthetic Technology Promise Better Living Through Design. Redshift. https://redshift.autodesk.com/articles/prosthetic-technology
Kim, M. S., Kim, J. J., Kang, K. H., Lee, J. H., & In, Y. (2023). Detection of Prosthetic Loosening in Hip and Knee Arthroplasty Using Machine Learning: A Systematic Review and Meta-Analysis. Medicina, 59(4), 782.
Kulkarni, P. G., Paudel, N., Magar, S., Santilli, M. F., Kashyap, S., Baranwal, A. K., … & Singh, A. V. (2023). Overcoming Challenges and Innovations in Orthopedic Prosthesis Design: An Interdisciplinary Perspective. Biomedical Materials & Devices, 1-12.
Patel, R. (2021, February 10). A Glimpse Into the Future of Prosthetics: Advanced Sensors, E-Skin, and AI. ALL ABOUT CIRCUITS. https://www.allaboutcircuits.com/news/glimpse-future-prosthetics-advanced-sensors-e-skin-ai/
Note: This article is written with the assistance of Dr Muhammad Mustafa, who is an Assistant Professor at Forman Christian College University (FCCU), Lahore. His main interest in research is Cancer metastasis and the impact of psychological factors on cancer progression. He is known for his work as a faculty trainer and science communicator.