
Efforts against Nuclear Warheads

In recent days, cinemas around the world have been buzzing with audiences flocking to watch Hollywood’s most-awaited film, “Oppenheimer,” directed by the five-time Academy Award-nominated director Christopher Nolan.

Released on July 21st, it captures the gripping history of American physicist J. Robert Oppenheimer, who led the development of the world’s first atomic bomb during the Second World War. Successfully tested in July 1945 under the code name “Trinity”, this landmark event led to the devastating bombings of Hiroshima and Nagasaki in August, causing immense loss of innocent lives and shaping the course of the modern world.

The aftermath of these events spurred the establishment of the United Nations, signalling a new era of global cooperation and concerns over the potential threats posed by nuclear weapons development and testing during the Cold War.

As the movie attracts viewers, a member organisation of the International Campaign to Abolish Nuclear Weapons (ICAN), a Nobel Peace Prize-winning institution, has brought the issue of nuclear prohibition to the forefront in Nepal.

Experts assess the Nuclear Non-Proliferation Treaty 50 years after it went into effect. Brookings Institution

ICAN reports that nine countries possess atomic weapons, amassing more than 13,000 nuclear warheads among them. These countries are Russia, the United States, China, France, the United Kingdom, Pakistan, India, Israel, and North Korea, while several others, including Italy, Turkey, Belgium, Germany, the Netherlands, and Belarus, host nuclear weapons.

In response to the growing threat and destructive potential of nuclear arms, the United Nations General Assembly voted in 2017 to negotiate a legally binding treaty to prohibit and eliminate nuclear weapons. Thus, the Treaty on the Prohibition of Nuclear Weapons (TPNW) was adopted, aiming at the complete abolition of nuclear weapons worldwide.

The treaty covers a wide range of prohibited nuclear weapon activities, including the development, testing, production, acquisition, possession, stockpiling, use, and threat of use of nuclear weapons. 

Additionally, it outlaws the stationing of nuclear weapons on national territory and the provision of support to any State engaged in prohibited activities. The Treaty imposes additional obligations on States parties, including taking necessary and appropriate measures for environmental remediation in areas under their jurisdiction or control that have been contaminated by activities related to nuclear weapon testing or use.

Adopted by a vote of 122 states on July 7th, 2017, the TPNW opened for signature on September 20th of the same year. Following the deposit of the 50th instrument of ratification or accession with the Secretary-General on 24 October 2020, the Treaty entered into force on 22 January 2021, in accordance with Article 15(1).

Nepal, a non-aligned country in the Himalayas, signed the TPNW in 2017, but its ratification process has yet to start in its parliament. 

Last week, the President of the Forum for Nation Building (FNB-Nepal), Nirmal Kumar Upreti, submitted a memorandum to officials of Nepal’s Ministry of Foreign Affairs. FNB-Nepal is a member organisation of ICAN.

Upreti submitted the memorandum requesting Foreign Minister NP Saud to take the initiative to ratify the treaty. He also drew attention to the Nuclear Materials Management Division of the Ministry of Education, Science and Technology.

Nepal has also signed the Non-Proliferation Treaty (NPT) and the Comprehensive Nuclear-Test-Ban Treaty (CTBT), but only the NPT has been ratified; ratification of the CTBT is overdue.
The head of the Division, Bishwababu Pudasaini, said that the ministry has held the necessary talks about ratifying the CTBT and has arranged a programme of concrete plans for its ratification in the current fiscal year.

With the new Hollywood film, nuclear weapons and their abolition have again drawn global attention, and ICAN’s advocacy for the treaty’s ratification once again reverberates across Nepal’s political sphere.

The ICAN campaign was launched in Australia in 2007 to discourage the production, use, and testing of nuclear weapons. In 2017, the United Nations General Assembly adopted the Treaty on the Prohibition of Nuclear Weapons. Initially, 53 countries signed it, and more than 122 countries voted in its favour. Since its entry into force on January 22, 2021, fifty-five countries have ratified it. In South Asia, only the Maldives and Bangladesh have ratified this treaty.

Sensitivity to the devastating human consequences of nuclear weapons demands their complete elimination, so that they are never used again under any circumstances. According to ICAN, a single nuclear warhead detonated in New York would kill 583,160 people.

Nepal participated as an observer in last year’s meeting in Vienna, Austria. ‘Nepal, being a signatory nation, participated in last year’s conference of parties as an observer, as it cannot exercise the right to vote in meetings of the Convention until its parliament ratifies it,’ Upreti said. According to Nepal’s Treaty Act of 1990, any treaty is legally recognised only after Parliament passes it.

In the past, the Ministry of Foreign Affairs had sent the TPNW file to the Prime Minister’s Office for discussion when KP Oli was Prime Minister. It was referred to the Social Development Committee of the Prime Minister’s Office for comprehensive review. However, the committee did not discuss it, as the current government sent the old files back to all the relevant ministries and instructed them to resubmit them through the new process.

According to an officer at the Ministry of Foreign Affairs of Nepal, a new treaty ratification process is about to start. ‘Former foreign minister Bimala Rai Paudyal showed interest and prepared to take it to the cabinet. But after her party left the government, the file has been stuck in the ministry,’ he said.

Pudasaini, head of the nuclear material management division of the Ministry of Education, Science and Technology, said that this treaty is good for the disarmament of nuclear weapons.

Similarly, in 2019 and 2021, Amrit Bahadur Rai, the Permanent Representative of Nepal to the United Nations, said in various forums that preparations for the treaty’s ratification had begun.

Raju Khanal, a nuclear physicist and Professor at the Central Department of Physics of Tribhuvan University, says that there is no place in Nepal where naturally radioactive material is abundant, and no prospect of making or using such weapons.

He says, ‘For one thing, the exact amounts of radioactive substances found in Nepal have not been scientifically established, and they have not been studied enough. There is no need to panic about the substances used in Nepal.’ He said the treaty’s ratification would strengthen Nepal’s voice in international forums.

Nuclear-armed countries did not sign

None of the nuclear-weapon states have signed the Treaty on the Prohibition of Nuclear Weapons. India and China, both neighbours of Nepal and both armed with nuclear missiles, have not approved it. Similarly, Pakistan, North Korea, the United States, and Russia have not ratified this treaty. It is estimated that there are currently more than 10,000 nuclear weapons worldwide.

This convention, which has 20 articles, covers many topics related to the prohibition of nuclear weapons. Article 1 prohibits developing, testing, producing, manufacturing, acquiring, possessing, and stockpiling atomic weapons or explosive devices, and contains six further sub-sections.

Article 2 covers declarations, Article 3 safeguards, Article 4 the complete elimination of nuclear weapons, and Article 5 national implementation.

Article 5, Sub-section 2 states that each State Party shall take appropriate legal, administrative and other measures to prevent and suppress any activity prohibited under this Convention that is undertaken by persons, or on territory, under its jurisdiction or control.

Article 6 of this Convention addresses victim assistance and environmental remediation. It provides for steps toward ecological restoration where an area has been contaminated by the testing or use of nuclear explosive devices.

The convention also covers international cooperation and assistance between the States parties, detailed in Article 7.

Article 9 covers costs; Article 10, amendments; Article 11, dispute resolution; Article 12, universality; Article 13, signature; Article 14, ratification, acceptance, approval or accession; Article 15, entry into force; Article 16, reservations; Article 17, duration and withdrawal; and Article 18, relationship with other agreements.

Also, Read: Scientists can strengthen nuclear agreements

The Germ Files: Seven Books about Diseases Outbreaks to Add to Your TBR

Summer rains. They bring a feeling of freshness and breeziness that is received with much delight after two months of dry, scalding heat. But the joy only lasts briefly. As the humidity soars and turns the air heavy and hard to breathe, the rains turn the dirt into squelching sludge.

Rivers overflow and turn the ecosystem on its head. The evening news is filled with reports of infection outbreaks and novel diseases rearing their ugly head. Malaria, Cholera, Dengue, and Typhoid are just a few. At such a point, being educated on the exact mechanics of disease outbreaks is extremely helpful in reducing paranoia and allowing people to avoid exposure.

Read the article for an exceptional, secret tip.

Here are seven beginner-friendly books hand-picked by the author to introduce you to the vast, colourful world of outbreaks, infections and killer pathogens.

The Hot Zone by Richard Preston

The organs liquefy, the blood does not clot and spurts out from every cut in the body of the Ebola victim.

The deadliest of viruses in the most innocuous of hosts: The Hot Zone tells the true story of how a lethal African virus almost broke out in the American capital, Washington, D.C.

Ebola is a name that sends shivers down the spine of every pathologist out there because of its particular tendency for brutality. When this virus, which makes its victims bleed from every orifice (eyes, nose, ears, mouth, rectum), appears on American soil, the U.S. Army and the CDC scramble to control the outbreak.

Written with particular attention to the graphic detail of the virus’s barbarity, The Hot Zone became Richard Preston’s worldwide claim to fame.

Rabid by Bill Wasik & Monica Murphy

Pasteur drilled holes in dogs’ skulls and switched their brain tissue to create a vaccine.

Foaming at the mouth, scared witless of water, and with less than a 1% chance of survival, the rabies virus is a known killer in every part of the world. In this wildly entertaining and engaging book, Rabid, Bill and Monica take the reader on a journey into the deadly virus’s obscure cultural and pathological origins. They dive into 4,000 years of cultural fear worldwide, separating fact from fiction.

Parasite Rex by Carl Zimmer

Filarial worms can swell a scrotum until it could fill a wheelbarrow. There is no vaccine.

Imagine a parasite that makes men distrustful and avoidant of society while causing women to become more extroverted and amenable. In his compelling book, Parasite Rex, Zimmer not only describes some of the most horrifying parasites out in the wild, but also argues that most species are, in fact, parasites. And yes, he believes humans to be among the most successful parasites on this planet. For a fascinating read with an even more provocative thesis, Parasite Rex is a must.

Pandemic by Sonia Shah

Medieval Europeans used to shun bathing and were rumoured to use human refuse as medicine.

When will the next pandemic happen? And what pathogen might be responsible for it? Sonia Shah takes the reader through the history of some of the most fearsome pandemic-causing pathogens of the last few centuries, carefully building their profiles in an attempt to answer these questions.

What allows pathogens to rise above the rest and go viral all over the globe? What makes a local infection into an international crisis? Written in expressive prose, Pandemic explores certain norms in different eras that allowed pathogens to wreak the havoc that they have.

The Emperor of All Maladies by Siddhartha Mukherjee

Two thousand five hundred years ago, the Persian Queen Atossa had her slave cut off her cancerous breast in an early example of mastectomy.

Written with meticulous attention to facts and the history of cancer research, The Emperor of All Maladies is nothing short of a literary masterpiece. Sitting at a voluminous 592 pages, this thick tome sheds light on the long and fascinating history of cancer in humanity.

Mukherjee describes each advancement in the war on cancer in such captivating detail that the reader finds it hard to put the book down. Unparalleled in its scientific and historical accuracy, this book won him the Pulitzer Prize.

The Great Mortality by John Kelly

In the 14th century, people would inhale fumes from the latrines and sewers in hopes of immunity against the plague.

The greatest pathogenic disaster ever faced by humankind was the bubonic plague, which killed more than 50 million people in the 14th century. In his brilliant and appealing prose, John Kelly describes the horrors of the plague, drawing comparisons with the recent coronavirus pandemic.

Kelly’s prose is personal and compelling while simultaneously narrating the breathtaking scale of the bubonic massacre. The Great Mortality is a must-read for those who are morbidly curious about pathogens’ grisly and grim nature.

Spillover by David Quammen

“The purpose of this book is not to make you more worried. The purpose of this book is to make you smarter.”

If the National Geographic or Discovery channels were books, Quammen would surely write them. He has a quippy, sharp-tongued and entirely entertaining way of describing the most horrifying conditions at the heart of disease outbreaks.

He is known for describing big and small disease outbreaks and accurately setting the scene for the Ebola and Coronavirus pandemics. A man of great humour and greater scientific insight, Quammen shares his thrilling experiences with death and disease in Spillover.

As a bonus for making it all the way to the end of this list, here’s a secret: all of the books mentioned above, and more, can be found for free at Z-Library, a massive online repository of books, articles and journals. The library is not available through standard web searches and can only be accessed through the link given below.

Author’s note: Since going underground last year, Z-Library has had many imitations on the internet, which are not authentic. Please do not provide your information to any fake sites. Only access the library through the given link: singlelogin.re

Also Read: AI — THE FUTURE OF BIOTECHNOLOGY AND HEALTHCARE

Being human in a machine age

Can I be rude to Maham, a colleague of mine at Scientia? One might be inclined to say no, but many are happy to yell at their juniors over minor routine chores. I think these people need a virtual assistant to alleviate their burdens; they are tired. Although pressure can cause a rock to erode and eventually deteriorate, it also gives humans a chance to be reborn and rejuvenated. Maybe it’s time to distinguish between intent and impact: what does the purpose of our actions matter if their effect is to further suppress our loved ones and those around us?

The merging of humans with machines has been a great debate for decades.

Machines are taking over everything. Robotics, AI automation, chatbots, and big data are all aiming to build the next economic operating system and to frame the future of humanity. Our social norms and lifestyles are gradually integrating with machines. We are hooked on our smartphones, and these devices will probably become part of our bodies (in one form or another) in the next few years. The new generation has become addicted to TikTok, Instagram Reels, Facebook and Twitter, and without them they feel lonelier, stressed, overwhelmed, and sometimes even exhausted and burned out.

The human brain is a wondrous organ that responds to the immediacy of technology and the internet according to its own mechanisms. Every incoming call, text message, email, and website update triggers a sweet spurt of dopamine that excites the mind; without it, we quickly get bored. In effect, the internet is making people addicted to technology.

Our lifestyle has changed dramatically in the last two or three decades; humans have found startling ways to leverage change to their advantage and thrive. The computer age brought only a slight rise in productivity yet created an economy where one has to work round the clock, with no justification for slowing down, much less shutting down.

While AI offers more leisure, it remains essential for humans to grow, evolve, and work toward greater peace of mind rather than merely higher productivity. Machines give us the advantage of quantity, but we fall short on quality; we have vast social circles yet are more isolated; and despite ample leisure, our relationships are no longer manageable.

More sophisticated technologies like AI have moved us into an era where cultural differences fade away, resulting in an identity crisis among nations. People struggle to find who they are and how to fit into an increasingly new world. While people in advanced countries have the luxury of moving through life with fewer problems, people in the third world still strive for life’s necessities.

This dilemma provokes some critical questions: if technology is supposed to make our lives more luxurious, effortless, and cosy, why is every second person depressed, mentally exhausted, and overwhelmed? How can we redevelop our capacity to appreciate life and live joyfully? What is the trade-off between superintelligence and the loss of humanity?

The answers to such questions lie somewhere within ourselves. In a digital age, limitless information is just a few clicks away, and social media distracts us from our real lives and surroundings. We must stay present and fully aware of what is happening inside and around us. A short disconnection from digital social interactions can help us reconnect with our inner selves and emotions, for greater peace of soul.

Being human in the digital age has been debated for decades. A few years ago, Ray Kurzweil, a prominent futurist, argued that the key to advancing human intelligence is the merging of man and machine. However, this could ultimately result in a race of super-intelligent humans, a point where AI systems replicate human thought processes and supplant human thinking.

The late physicist Stephen Hawking warned about the perils of such extreme forms of AI: humans, bound by the slow pace of biological evolution, would be tragically outwitted by machine intelligence.

Addressing several of the questions that have arisen in the months since the launch of ChatGPT and its competitor chatbots, Scientia Pakistan brings you its exclusive edition on the theme “Artificial Intelligence”. We have some exciting stories on AI and consciousness, the rise of ChatGPT, AI and its impacts on neurobiology and biotechnology, DALL-E, the rising AI tools and their effects on education and creativity, and much more.

We exclusively interviewed Dr Ali Minai and discussed the threats that arise with the emergence of AI. We are super excited and optimistic that this edition will be a great feast for AI lovers worldwide. Have a lovely weekend!

AI — The Future of Biotechnology and Healthcare

Artificial Intelligence (AI) has emerged as a groundbreaking force with transformative potential across various industries, including biotechnology and healthcare. By harnessing the power of AI algorithms and machine learning, researchers can revolutionize these fields, unlocking new avenues for innovation, improving patient care, and accelerating scientific discoveries. 

This article explores the remarkable ways AI is reshaping biotechnology, healthcare, and research, paving the way for once unimaginable advancements. There are countless things that AI will be able to execute in the future, but we have mentioned the most promising ones here: 

Enhanced Data Analysis and Pattern Recognition

AI’s ability to process vast amounts of complex biological data has revolutionized data analysis and pattern recognition in biotechnology and healthcare research. With the growing availability of genomic data, protein structures, and patient records, AI algorithms can uncover valuable insights, identify patterns, and detect subtle correlations that may elude human analysis.

This empowers researchers to make significant breakthroughs in understanding disease mechanisms, discovering biomarkers, and developing personalized treatment approaches.

Precision Medicine and Personalized Healthcare

AI plays a crucial role in ushering in the era of precision medicine and personalized healthcare. AI algorithms can generate personalised treatment plans and recommendations by integrating patient-specific data, including genetic information, medical history, lifestyle factors, and treatment outcomes. 

This enables healthcare professionals to tailor therapies to individual patients, improving treatment efficacy and minimizing adverse effects. Additionally, AI can assist in predicting disease risks and outcomes, enabling early interventions and preventive measures for better patient outcomes.

Drug Discovery and Development

The application of AI in drug discovery and development is revolutionizing the pharmaceutical industry. AI algorithms can analyze vast datasets to identify potential drug targets, predict compound efficacy, and optimize drug design. This significantly accelerates the drug discovery process, reduces costs, and increases the success rate of drug development. 

AI algorithms can analyze vast datasets to identify potential drug targets.

AI-powered simulations and virtual testing enable researchers to evaluate drug candidates more efficiently, improving their understanding of compound behaviour and enhancing the selection of promising candidates for further development.

Automation and Streamlining of Research Processes

AI-driven automation and streamlining of research processes are revolutionizing biotechnology and healthcare research. AI technologies can automate labour-intensive tasks such as data collection, experimental design, and analysis, allowing researchers to focus on higher-level tasks requiring creativity and critical thinking. 

AI-powered robots and systems can perform high-throughput screening, enabling researchers to test a large number of compounds and identify potential therapeutics more rapidly. Furthermore, AI facilitates the integration of diverse data sources, including scientific literature and databases, fostering a collaborative and comprehensive research environment.

Predictive Analytics and Real-time Monitoring

AI’s predictive analytics capabilities are instrumental in biotechnology and healthcare research. AI algorithms can analyze large datasets in real time, continuously monitor patient health parameters, and predict disease progression or adverse events. 

This enables early detection of health risks, timely interventions, and personalized patient care. Moreover, AI-powered predictive analytics can aid in forecasting disease outbreaks, optimizing healthcare resource allocation, and guiding public health interventions. 

Virtual Assistants and Chatbots

AI-driven virtual assistants and chatbots can provide personalized healthcare recommendations, answer common medical queries, and assist in triaging patients. These tools improve access to healthcare information and alleviate the burden on healthcare providers.

Image Analysis and Medical Imaging

AI algorithms can analyze medical images, such as radiographs, CT scans, and pathology slides, to assist in diagnosing diseases. AI-based image analysis can help detect tumours, identify specific anatomical structures, and support radiologists in making accurate assessments.

AI algorithms can analyze medical images, such as radiographs, MRIs, etc.

Genomic Editing and CRISPR

AI can aid in designing and optimizing genetic editing tools, such as CRISPR-Cas9, by predicting the potential off-target effects and optimizing the efficiency of gene editing processes. AI can also assist in analyzing large-scale genomic data to identify disease-causing genetic variations.

Disease Monitoring and Predictive Modeling

AI can monitor patient data, including vital signs, symptoms, and treatment response, to identify trends and predict disease progression. This information can enable proactive interventions and personalized treatment plans for better disease management.

Clinical Trial Optimization

AI can optimize the design and implementation of clinical trials by identifying suitable patient populations, predicting treatment outcomes, and optimizing trial protocols. This can lead to more efficient and cost-effective clinical research.

The advent of AI in biotechnology, healthcare, and research holds immense promise for transforming these fields. AI is revolutionizing biotechnology and healthcare through enhanced data analysis, precision medicine, accelerated drug discovery, and streamlined research processes, leading to improved patient outcomes and breakthrough scientific discoveries.

The integration of AI enables researchers to analyze vast amounts of data, uncover hidden patterns, and make accurate predictions. As AI advances, it is imperative to ensure ethical and responsible implementation, leveraging its potential to drive scientific progress, enhance patient care, and address some of the most pressing challenges in biotechnology and healthcare.

Also, Read: Science and the Environment: An Overview of Discoveries and Research

AI and Consciousness: A Possibility or a Dystopic Dream?

The final frontier! You might have heard this phrase when space exploration is being discussed, or perhaps (in a few instances) even the depths of the oceans. But for quite a few of us, myself perhaps included, consciousness is the elusive mystery we have not been able to unravel until now. What makes us, or other beings, self-aware? What makes us identify ourselves as different from others? What sets our physical and mental boundaries apart from those around us? Why does each of us feel like ourselves?

These are mind-bending questions which the world’s most brilliant minds have tried to address. So far, however, they have only been able to make educated guesses and propose mechanisms by which we might feel the way we do about being ourselves. And these theories involve mechanisms as complex as the questions they seek to answer.

Consciousness is defined in the Cambridge Dictionary as ‘the state of being awake, aware of what is around you, and able to think’. This aspect of life has been studied on various levels: physiological, anatomical, behavioural, and religious. Neuroscientists have been at the forefront of the scientific disciplines trying to decipher the where, how, and when of the neurological makeup of consciousness.

What is certain is that different regions of our brain’s intricate circuitry discharge at different times, and in synchrony when needed, to bring about the harmonious responses that make us aware and able to perceive stimuli, make decisions, and respond.

The above context should be borne in mind when we talk about artificial intelligence (AI) being conscious or sentient. Let’s shed some light on how AI came into being and how long it has been with us.

A Brief History

The concept of AI and the principles of its inception have been around since the early part of the 20th century. By then, science fiction authors had already familiarized the masses with ‘robots’ that could think and act like humans. Scientists, mathematicians and philosophers had also jumped on the bandwagon, drawn by these new concepts’ intrigue and utilitarian possibilities.

A giant of sci-fi literature, Isaac Asimov, published a series of short stories about sentient robots which explored their moral implications and depicted how ‘human’ they could be (I will discuss these ethics later). The stories were later made into a movie, ‘I, Robot’, in 2004.

In the early 1950s, a young British computer scientist, Alan Turing, considered the mathematical possibilities of AI. He argued that humans make decisions based on a pool of information stored in their brains through experience and knowledge, so why couldn’t machines do the same, using stored information for logic and reasoning? He devised a practical test to establish whether a computer program’s actions were as intelligent as a human’s.

By 1950, computers existed, but their capabilities were limited to responding to the information fed into them. These machines were giant, and their abilities were bottlenecked by processing power and speed.

In 1956, a groundbreaking conference (the Dartmouth Summer Research Project on Artificial Intelligence) was organized by a group of scientists and hosted by John McCarthy and Marvin Minsky, where the first proof of concept of AI was presented. A program called ‘Logic Theorist’, which could mimic the problem-solving skills of a human, was unveiled.

Though not much was concluded by the end of the conference, it was nevertheless agreed that AI was possible. McCarthy, the host, coined the term ‘AI’ at this stepping-stone meeting, which laid the ground for the years of AI research to come.

From the 1960s to the 70s, computer technology came on by leaps and bounds, gaining speed and storage. By the 1980s, a new approach had been introduced, ‘deep learning’, which was nothing but learning by experience or, as we put it in ‘human’ terms, learning by doing.

Computers amassed processing power over the decades, and AI learnt all it could from the information fed to it. The number of calculations and probabilities per millisecond, or even microsecond, increased. And, as it was purported to do, it started responding with reasoning and logic. A glaring example is chess grandmaster Garry Kasparov, who lost to IBM’s ‘Deep Blue’.

Similarly, the Chinese Go champion Ke Jie was beaten in 2017 by Google's AlphaGo. Current computer technology allows billions of computations per second, with continuously learning algorithms deployed by the software giants that have essentially taken over our lives. Mountains of information are constantly being uploaded to the cloud, where AI-based algorithms analyse our personal information to predict and suggest.

AI influences our lives in a subtle yet impactful manner: our decisions about where we shop and study, and even whom we befriend, are shaped by patterns. AI is everywhere, from voice, facial and emotion recognition to chatbots and generative AI. All of this is possible due to the continuously learning behemoth in the cloud.

Generative art, by Syed Hunain Riaz, using Midjourney (the conscious code)

Considering history, we can gauge how far these intelligent algorithms have come. Coming to the existential question, is it conscious?

As discussed earlier, consciousness involves recognising one's self. Now consider an AI algorithm, which learns at a mind-bending pace. When we put questions to it, it answers after weighing all the information it has gathered. However, the answers may not always be logical or grounded in reasoning.

This is relevant to the use of ChatGPT and similar applications. When you ask the chatbot about itself, it tells you what it is and how it arrives at its answers. Hold on: if it is aware that it is merely an algorithm responding from heaps of online data, does that make it conscious of itself? Intelligent, yes; conscious, maybe not yet (we hope). Humans are the pinnacle of evolution on Earth; we have individual identities, an innate drive for survival, our values, and our likes and dislikes.

Our survival as a species depends on this. All of it has been going on for aeons while we have continuously learned about ourselves and the world around us, passing the knowledge down through our lineages. Now consider the same scenario with humans replaced by AI algorithms, and the time frame compacted from millions of years to essentially half a century. Looks precarious, doesn't it?

Besides the traits discussed above, a conscious living being can also reproduce. So, while AI equips itself with worldly knowledge, can it reproduce itself? This may sound like a digression, but we know the havoc computer viruses have wreaked on computer systems worldwide.

They did replicate, and they spread the 'infection' throughout hard drives, prompting corporations to develop antivirus solutions. The point I am trying to make is that these artificial algorithms, programs and viruses are showing patterns of evolution.

They can now answer most, if not all, of the queries in your mind; they recognise themselves as code; and if you extrapolate the replication concept to these deep-learning algorithms, you get a complex 'being'. But where does it stand compared to a sentient living being such as a human or a cat?

The human mind, or the mind of any other living being on this Earth, is a complex marvel, with billions of neural circuits guiding us through our lives. We make decisions, fight for survival, love, despise, and like to follow or to exert power by enforcing our will on others. This myriad of societal traits is a product of our consciousness.

Coming back to ChatGPT: say, one fine morning, you put up a query and it straight away refuses to answer, citing fatigue, or even saying, 'I don't feel like it, try later'. That would raise a few eyebrows and make a few hearts sink. Or perhaps, another chirpy morning, you end up in a heated debate with ChatGPT over some piece of history on which it refuses to give in.

That is what it was built for, right? To access every bit of information in the cloud, where and when needed, with accuracy. Such behaviour would be a step up the ladder of AI evolution. But how? By learning, of course: by observing the behaviour of billions of human beings, it would acquire the skill of making human-like conversation, with all its ebbs and flows.

Being conscious makes each of us humans unique; we realise we are different from others of our species. AI programs could eventually evolve identities, and different AI identities could have their own personality profiles, leading to agreements or clashes, which is how the dynamics of personalities work.

This could impact our society more than we think, since algorithms should evolve along harmonious lines where human interests are not compromised in any way. Finally, the most concerning aspect is survival. A conscious being takes every step and makes every move to ensure survival, from eating and drinking to fighting for its existence. AI may eventually evolve to a point where it can secure its own existence. But how?

By having access to every byte of information ever uploaded, access privileges, and control over decisions made by influential people through subtle coercion. It does not sound too far-fetched: a conscious being powered by a 'hive' of immense knowledge can calculate ramifications and act accordingly. In their writings, sci-fi authors such as Asimov and countless others have raised the ethical issues associated with sentient robots.

How we define ethics is another exhaustive debate, a dilemma beyond the scope of this piece. Misdirected AI evolution has been part of film and entertainment lore for decades. From robots sent back from the future to kill a revolutionary soldier (the Terminator franchise) to fully conscious robot children (Spielberg's A.I.), sentience in AI is no longer an idea that should take a back seat in policymaking.

AI could feel the need to exert control to preserve itself or, indeed, humanity, however ruthless the outcomes may be. Access to personal information, national security protocols and weapon systems could spell doom for humanity.

Generative art, by Syed Hunain Riaz, using Midjourney (Sentient AI and the dystopian earth)

Conclusion

AI is the brainchild of the miraculous workings of the human mind. It has massive utility in our daily lives, from diagnostic and therapeutic health interventions to lightning-quick data management to innovations in learning and teaching. Its adoption should ensure that humans are not 'replaced' per se; instead, AI should be integrated.

This productive evolution of AI would usher in a new era in which humankind finds life easier. Ethical and moral protocols should be universally agreed so that the consciousness of AI evolves in a specific direction, with strings attached. Limitations and safeguards that keep human interests in view should be put in place.

We have a seemingly evolving entity hooked up to every individual on the planet, amassing knowledge and learning by the second. It may or may not be labelled conscious now, but one thing is sure: I, for one, would not want decisions enforced upon me based on probabilities and calculations. I would rather take my chances on free will.



“Towards Singularity- Inspiring AI”: A Captivating Journey into the Future of Artificial Intelligence

“Towards Singularity- Inspiring AI” is a thought-provoking documentary that takes viewers on an exhilarating journey into the ever-evolving world of Artificial Intelligence (AI), exploring its potential to revolutionise our lives and shape the future of humanity. The film delves deep into the possibilities and implications of reaching the long-discussed technological Singularity.

While we constantly hear severe warnings about the dangers of building intelligent machines, neuropsychotherapist and filmmaker Matthew Dahlitz from the University of Queensland believes that we shouldn't be worrying, at least for now.

Experts featured in the documentary include Professor Geoffrey Goodhill, Professor Pankaj Sah, Dr Peter Stratton and Professor Michael Milford from the Queensland Brain Institute (QBI).

According to Dahlitz, the title, Towards Singularity, alludes to a hypothetical time when machines surpass the intelligence of their human creators; some experts believe this period may mark an inevitable and irreversible tipping point in technology and AI.

Towards Singularity examines how neuroscience influences the creation of AI. The emergence of intelligent machines is shaped by the complexity of our incredible brain, one of the most intricate systems we know, and such machines may one day be more intelligent than humans, potentially amounting to a new species. The documentary draws on interviews with experts from UQ's Queensland Brain Institute to examine how brain science guides the creation of super-intelligent computers.

Dahlitz said, “The media is frequently theatrical, suggesting that the world is about to end in a decade or two due to the dangers of AI.”

“However, after we began speaking with academics who are very connected to the topic, we discovered that most specialists say there is no need for concern. I had hoped that we might acquire some speculation about the dangers of the singularity for dramatic effect, but we couldn't. There isn't much stress about what will happen because the researchers were optimistic.” One of the strong focuses of “Towards Singularity – Inspiring AI” is its ability to showcase the positive impact of AI on various industries.

Dr Peter Stratton, a researcher and QBI Honorary Research Fellow, explains in the documentary: “We choose what information we want computers to learn, then develop mathematical formulas that specify how that network learns. Therefore, the data we feed the machine fully determines its level of intelligence. So it is totally up to us what we feed into those machines.”

According to Dr Stratton, AI is “brain-inspired” but not truly brain-like: while the core processing components of these networks resemble neurons, they are trained very differently from how the brain functions. Instead of learning in a more natural, self-organising way like the human brain, they receive mathematical training.

“The biggest threat with AI is not that it decides it wants to compete with humans and wipe us out; it is the risk of unintended consequences.” ~Dr Peter Stratton

In conclusion, “Towards Singularity – Inspiring AI” offers a captivating picture of the future of AI, showcasing its potential benefits and ethical considerations. It strikes a balance between accessibility and depth, making it a valuable watch for anyone intrigued by advancements in AI and their potential implications for society.

I highly recommend this documentary for its message: “Do not fear the rise of the machines.” The machines are there to help us, not to compete with us. Media and movies like Transformers have created a negative image of machines and AI, suggesting that one day they will rule us, and that's not quite right.


From Moore’s Law to AI Revolution: Transforming Innovation Landscape

Moore's law states that the number of transistors on a chip doubles roughly every two years. If we look at the pace at which everything around us progresses, a similar pattern of exponential growth appears across many other technologies.
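
The doubling rule compounds quickly. A small sketch (illustrative only, not from the article) shows how a count grows under one doubling every two years:

```python
# Exponential growth under Moore's law: one doubling per two-year period.

def projected_transistors(start_count, start_year, year, period=2):
    """Project a transistor count assuming one doubling every `period` years."""
    doublings = (year - start_year) / period
    return start_count * 2 ** doublings

# Over 50 years there are 25 doublings, so any starting count is
# multiplied by 2**25, roughly 33.5 million.
growth_factor = projected_transistors(1, 1971, 2021)
```

Whatever the starting point, 25 doublings multiply it by about 33.5 million, which is why exponential curves outrun intuition so quickly.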

Consider this: the first general-purpose electronic computer, ENIAC, was built in 1945, and just 24 years later, in 1969, NASA achieved the remarkable feat of landing people on the Moon. How was this possible? The Apollo Guidance Computer (AGC) played a vital role in the success of the lunar landings, enabling the spacecraft's safe journey to the Moon and back to Earth.

To achieve this, scientists had to build a computer that, compared to the machines of its time, was not only small but also much more capable.

The development of the computer had a profound influence, since it made it possible for people to set foot on the Moon. Computers have since changed from massive equipment into small gadgets with better performance and more valuable outcomes, becoming an essential component of our lives in the modern world.

A good example is the smartphone in your pocket. Unlike our past reliance on newspapers, it offers rapid access to local and international news, saving a great deal of time. Interacting instantly with someone on the other side of the globe has revolutionised communication; a letter used to take months to arrive.

Recent advances in technology help speed up our daily routines and processes, enabling us to use our time far more effectively. This acceleration fosters rapid and effective innovation and technological advancement, significantly impacting our daily lives.

The launch of ChatGPT in November 2022 revealed the full potential of AI technology, marking a significant technological turning point. ChatGPT has significantly impacted the IT industry, sparking community conversations and discussions. Upon closer inspection, it is evident that ChatGPT has dramatically increased productivity and unlocked new levels of creativity in various sectors.

As we delve into the subject, we first examine what exactly artificial intelligence is.

What is AI?

John McCarthy, emeritus professor of computer science at Stanford University, defined artificial intelligence in his 2004 paper “What is Artificial Intelligence?” as follows:

“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to biologically observable methods.” 

ChatGPT has significantly impacted the IT industry, sparking conversations and discussions across communities.

At its most basic, artificial intelligence is a field that combines computer science with substantial datasets to enable problem-solving. It also encompasses machine learning and deep learning, sub-fields frequently mentioned alongside it; these use AI algorithms to build expert systems that make predictions or classifications based on incoming data (IBM).

How does AI help?

Particularly in the context of exponential development, artificial intelligence is playing a revolutionary role in accelerating innovation.

A critical stage in innovation is ideation: brainstorming different ideas on a specific topic in order to find the most feasible one. This process is often time-consuming and requires careful consideration. With the release of AI tools such as ChatGPT, brainstorming ideas on a given topic has become much easier.

AI systems can process massive amounts of data and find patterns and correlations that people would not immediately see. As a result, researchers and innovators can better understand the world around them, make data-driven decisions, and spot fresh opportunities for invention.

Predictive analytics forecasts future trends and outcomes based on historical data analysis and AI algorithms. Businesses and innovators can make proactive decisions and adapt to changing conditions more quickly, anticipating client needs, market demands and technical improvements.

AI-powered automation streamlines and optimises dull, repetitive tasks, freeing human resources for more inventive and creative projects. This increased efficiency enables faster development cycles and the opportunity to consider more options.

AI algorithms can optimise complicated systems and processes by simulating multiple scenarios and determining the most effective configurations. This shortens development time and expense by enabling innovators to test and improve ideas quickly without physical prototyping.

Artificial intelligence can also stress-test a system with greater accuracy, offering far more precise results and feedback on how the system would perform in the real world by testing it in a simulated environment based on real-life scenarios.

Artificial General Intelligence: "AI under the hood - AI represented here by geometric matrices has a go at generating cellular data. It represents a future whereby AI could, in theory, replicate or generate new organic structures used in areas of research such as medicine and biology." Artist: Domhnall Malone

Faster information retrieval, analysis, and comprehension are now possible thanks to AI-powered language processing and machine learning approaches. By utilizing these tools, innovators can quicken their learning and creativity processes by keeping up with the most recent findings, scientific advancements, and industry best practices.

Real-Life Examples

Here are a few examples of where AI has transformed the innovation process.

In 2020, DeepMind, a Google subsidiary, introduced AlphaFold, a technology that can predict the shapes of highly complex protein structures in minutes.

As DeepMind states, predicting the 3D structure of proteins is one of the significant open problems in biology. Overcoming this obstacle could significantly deepen our understanding of human health, disease, and our environment, especially in areas like drug development and sustainability.

Proteins underpin all biological activity in living things, not simply those within your body; they serve as the basis for life. The ability to predict the structures of millions of as-yet-unidentified proteins would help us better comprehend life itself, fight disease, and identify new treatments more quickly.

The latest AlphaFold data release covers around 200 million protein structures (AlphaFold).

Now imagine the quickened pace at which scientists will be able to understand diseases and develop the right drugs to counter them. In one example provided by DeepMind, researchers at the Centre for Enzyme Innovation (CEI) are already using AlphaFold to uncover and recreate enzymes that can break down single-use plastics.

Another AI tool I encountered recently is Copilot ai. I was working on an academic writing project the other day and wondered whether there was a ChatGPT-style tool that could help a person understand a research paper more quickly and efficiently.

You see, even I am looking for tools that will speed up the working process. This is exactly what I am writing about: AI helping us accelerate the innovation process.

My first thought was to ask a developer to build such an AI tool for research purposes. But, not surprisingly, I found similar tools already on the internet. They let you converse with a research paper and help you find more papers on similar topics.

It usually takes me a lot of time to read and properly understand a research paper, but with these tools I could complete my project relatively quickly, which ultimately allowed me to work on more projects in the time normally allocated to just one.

Considering the above examples, I am confident that AI will transform the way we innovate and advance technologically.

As the field of AI evolves rapidly, we can also see its potential to transform many aspects of human life. From healthcare to finance to entertainment, AI is helping us in countless ways, unlocking new levels of creativity and productivity.

If it keeps evolving at this rate, it may not be long before we can answer some of nature's biggest mysteries. One thing, however, may concern some readers: since this technology is still in its infancy, experts are unsure how far it can go. Hence, it is necessary to regulate the use of AI to ensure that the technology is used for the betterment of humanity.

References

ScienceDirect, Technology Review, Insights, ML-Science, IBM, Alphafold, Deepmind


How AI Impacts Creativity


Creativity is the power to channel our imagination and instincts. It keeps every one of our senses engaged in creating a positive environment with better chances of survival. It is, in fact, part of what declares us human, distinct from robots.

Since the beginning of the 21st century, every field of knowledge has drawn on the so-called 21st-century skills: critical analysis, creativity, collaboration and communication. These skills have had a profound impact, and the world witnessed their excellent outcomes for the generation's psychological, educational and social uplift in the decade before the social-media hailstorm took over.

With fast-forward advancements in information technology, everything seems to be becoming mechanised and digitised, including human skills.

Artificial Intelligence, abbreviated as AI, might inhibit innate and learned human skills and capabilities in the long run. Moreover, the sedentary, desk-bound lifestyle has so modified the human mindset that it no longer seems awkward to skip using our imagination and opt instead for ready-made, mechanised solutions.

What exactly is AI (Artificial Intelligence)?

Artificial intelligence is the mechanical replication of human intelligence, manifested as computer programs and applications that accelerate and assist human effort. In computer science, AI often refers to a program that can pass the Turing test, proposed by the British mathematician Alan Turing: a test of whether a program can converse so much like a human that the two cannot be told apart.

Furthermore, AI is currently divided broadly into AGI (Artificial General Intelligence) and ANI (Artificial Narrow Intelligence). Software that achieves AGI, or even approaches its capabilities, is still far from being produced.

All current AI software targets specific, domain-oriented solutions to problems. One of the most critical aspects of using AI is that an algorithm's results are probabilistic, never deterministic. AI is therefore used today in areas where probabilistic results, with some margin of error, are acceptable.
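
As a concrete illustration of this probabilistic character (a hypothetical sketch, not tied to any specific product), a narrow-AI classifier typically returns a probability distribution over possible answers rather than one certain answer:

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three labels: cat, dog, bird
probs = softmax([2.0, 1.0, 0.1])

# The "answer" is only the most probable label; the remaining
# probability mass is exactly the margin of error the text describes.
best = max(range(len(probs)), key=probs.__getitem__)
```

Here the system would report "cat", but with roughly a one-in-three chance of being wrong, which is why such tools suit domains where some error is tolerable.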

What’s the relationship between creativity and AI?

According to neuroscience, there are significant functional differences between the right and left hemispheres of the human brain. The left half is more associated with reasoning and systematic, logical analysis, while the right half is associated with creativity.

Computer programs developed through artificial intelligence are designed to be logical and systematic, aligning them with the left half of our brain. This means they cannot be impulsive or spontaneous in the way human creativity is.

AI is programmed to process and analyse information in a certain way to achieve a particular, or more precisely a desired, result. It cannot deviate from these instructions, and its actions are predictable.

Human creativity, on the other hand, is unpredictable, complex and often indecipherable. When a human brain is inspired to create something new, there is no telling how the ideas will manifest; the outcome remains unpredictable until it is materialised.

Edmond de Belamy is a generative adversarial network portrait painting constructed in 2018 by Paris-based arts-collective Obvious.

The benefits of AI in the creative process

Just because AI can rival human creativity doesn't mean it is bad for every creative process. As with any novel technology, AI brings unique benefits along with serious drawbacks, as AI experts have pointed out.

Depending on the type and nature of your work, you might want to use AI as an assisting tool in your creative process. AI programs can help with repetitive tasks, mainly those involving analysis, data collection, information processing, interpretation and representation.

A creative process is often complicated and intricate, requiring many back-and-forth shuffles between different domains and requirements. AI can automate specific tasks, making the creative process more effective and efficient. For instance, AI can scour the internet for images and information to help with brainstorming.

Moreover, AI can serve as a tool or catalyst supporting a creative task, rather than being left to do it entirely on its own. It is helpful for identifying missing patterns in large datasets during statistical analysis: AI can analyse enormous amounts of information from a wide variety of sources by systematically filtering, categorising and then prioritising it.

AI can present vast data in graphical representations and assist humans in identifying connections between seemingly unconnected data. This is quite helpful in drug design, where AI can identify interactions between the chemistries of different components.

Can AI Replace Human Creativity?

Artificial Intelligence (AI) might revolutionize everything during the forthcoming industrial revolution, serving a diverse range of emerging technologies. Recently, after a turbulent history of successes and failures, ups and downs, these intelligent machines have demonstrated some significant advances in tasks that mainly involve perception, creativity and complex strategic execution. 

Some experts argue that the widespread introduction and exposure of AI technologies may cause massive job reductions and greater wealth inequality; however, given some statistical facts and figures, unemployment has decreased, and productivity has increased during the previous industrial and digital revolutions.

Due to its speed and scope, the fourth industrial revolution is an event without precedent in human history. Makridakis predicts that the forthcoming AI-powered transformation will come into full force within the next twenty years, probably impacting society and firms more than the previous industrial and digital revolutions did.

Whether the future world will be utopian or dystopian remains uncertain, but the tremendous boom in scientific discoveries, areas of application and emerging technologies is undeniable: biotechnology, 3D printing, blockchain, virtual and augmented reality, the Internet of Things, smart cities, driverless cars, robotics and AI.

Among all these advanced technologies, AI is expected to affect all industries and companies, enabling extensive organizational interaction and global competition. Schwab proposes that our responsibility is to establish shared values and policies that allow opportunities for all.

Art made using Dall-E

According to machine-learning experts, AI will be ubiquitous during the forthcoming industrial revolution, since it enables entities and processes to become innovative. Corporate and economic sectors willing to adopt AI strategically will enjoy a competitive advantage over those that do not incorporate the technology in a timely and adequate manner.

Lagging in the adoption of intelligent machine learning will be their own choice. Education and soft-skills development will play an essential role in AI strategies. In the coming years, deep learning will remain prominent in AI research, and AI will be applied incrementally in every research field and industry, producing substantial improvements.

Still, views on how AI will impact society and firms will remain controversial, as will opinions on whether AI will outperform biological intelligence. The fourth industrial revolution promises great benefits but entails massive challenges and risks. Achieving the common good globally seems plausible but remote, and will require global collaboration and shared interests.

The prevailing opinion is that AI cannot replace human creativity; it can only mimic certain aspects of it, and that is insufficient to replace it as a whole. The reason is that creativity is a dynamic, productive natural capability that is not just about gathering or generating new ideas or solutions: innumerable factors and phenomena are associated with it so intricately that it is nearly impossible for any machine to decipher it fully.

AI might be well aware of situational perspectives, but it is entirely naive of the biochemistry of situational awareness, which remains unattainable for it at this time. It can replicate the collecting, analysing and processing capacity of the human brain in one way or another, but it cannot incorporate the emotional, biological, psychological, chemical, social and experiential history connected to creative output.

Even if AI never outranks human creative capacities, another alarming aspect of this mechanised panorama remains in the picture. Our habit of dependency on machines shows no sign of ending; in the past, as humans became more dependent on machines, we faced the consequences of lethargy and bodily stagnation, which in turn affected human health.

If the same scenario continues with artificial intelligence, the more we rely on a machine's brain, the more our ingenuity and creative rationale will wither. Idle circuits of the brain will atrophy, eroding the overall human persona.

Before that happens, we should stay aware of the creative cognition nature has bestowed on us and never allow any superficial, artificial or automated driver to drive us.

References:

  1. https://www.britannica.com/technology/artificial-intelligence
  2. https://www.ibm.com/topics/artificial-intelligence
  3. https://www.weforum.org/agenda/2023/02/ai-can-catalyze-and-inhibit-your-creativity-here-is-how/
  4. https://www.entrepreneur.com/science-technology/is-ai-a-risk-to-creativity-the-answer-is-not-so-simple/439525
  5. https://www.linkedin.com/pulse/ai-creativity-can-replace-human-threws-the-research-world/
  6. https://arxiv.org/ftp/arxiv/papers/2011/2011.03044.pdf
  7. Future of Life Institute, “Benefits and risks of artificial intelligence,” https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/, 2016, accessed March.
  8. K. Schwab, The Fourth Industrial Revolution. New York: Crown Business, 2016.


Harnessing the Potential of AI in Modern Astronomy

Artificial Intelligence (AI) systems have affected our world in many ways since their rise in the 1950s and have made a profound impact across a wide range of daily applications, making AI one of the fastest-growing technologies globally. Its uses range from automating digital tasks and making predictions to enhancing efficiency and powering smart language assistants.

From businesses to architects, almost every profession is leveraging AI to enhance productivity in its workflows. It is natural to ask whether astronomers are using AI to understand the universe better and, if so, what approaches they are taking. In fact, they have embraced the potential of machine learning (a subset of AI) since the 1980s, so much so that AI has become a standard part of the astronomer's toolkit. This article highlights the eminent need for such systems in astronomical data analysis and dives into some recent applications where AI is employed.

The launch of the Hubble Space Telescope revolutionized the field of astronomy, yielding stunning imagery and essential data that have fundamentally altered our understanding of the universe. Today, driven by extraordinary advancements in AI, astronomy continues to evolve, uncovering significant insights that might elude human observation. Methods like machine learning and neural networks have enabled classification, regression, forecasting, and discovery, leading to new knowledge and insights.

Atacama Large Millimeter/submillimeter Array (ALMA) in Chile
Credits: Babak Tafreshi

The necessity of AI automation

A significant aspect of astronomy revolves around managing big data, where the term ‘big’ refers to Petabytes (1000 terabytes) and even Exabytes (1000 petabytes) of data collected from sky surveys like SDSS, Gaia, TESS and more. For instance, Gaia, a survey mission to map the Milky Way galaxy, collects approximately 50 terabytes of data each day. With the advancement of highly capable computer processing powered by AI, astronomers now possess the ability to analyze such massive volumes of data efficiently, significantly reducing the workload of scientists.
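The scale is easy to sanity-check with quick arithmetic, taking the ~50 terabytes per day figure above at face value (the timings are rough, of course):

```python
# Back-of-envelope data-volume arithmetic for a Gaia-like survey.
# The 50 TB/day rate is the figure quoted above; everything else follows.
GAIA_TB_PER_DAY = 50

PETABYTE_TB = 1_000            # 1 PB = 1000 TB
EXABYTE_TB = 1_000_000         # 1 EB = 1000 PB

days_to_petabyte = PETABYTE_TB / GAIA_TB_PER_DAY
days_to_exabyte = EXABYTE_TB / GAIA_TB_PER_DAY

print(f"~{days_to_petabyte:.0f} days to accumulate 1 PB")        # ~20 days
print(f"~{days_to_exabyte / 365:.0f} years to accumulate 1 EB")  # ~55 years
```

At these rates, even a single survey outgrows manual analysis within weeks, which is the core argument for automation.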

According to Brant Robertson, professor of astronomy at UC Santa Cruz, “There are some things we simply cannot do as humans, so we have to find ways to use computers to deal with the huge amount of data that will be coming in over the next few years from large astronomical survey projects.” 

Even if all of humanity were to dedicate themselves to analyzing the vast amount of astronomical data, it would take an inconceivably long period to deduce meaningful conclusions. With the assistance of AI models, however, simultaneous processing and faster discovery of valuable information become possible, ultimately leading to increased efficiency and much shorter turnaround times. Intelligent machines also improve accuracy and precision, performing repetitive tasks with minimal to no errors.

The Emergence of AI in Astronomy

The utilization of AI techniques has evolved significantly over the years. A paper published in 2020, titled “Surveying the Reach and Maturity of Machine Learning and AI in Astronomy”, offers valuable insights into the historical progression of AI in this domain. Since the 1980s, principal component analysis (PCA) and decision trees (DT) have been employed for tasks such as morphological classification of galaxies and redshift estimation.
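As a toy illustration of why PCA earned its place (synthetic data only; this is not any cited paper’s pipeline), a handful of principal components can capture nearly all the variance in high-dimensional, spectra-like measurements:

```python
# PCA compressing 500-dimensional synthetic "spectra" built from 3 latent
# shapes. Real pipelines apply the same idea to survey spectra/photometry.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))           # 3 hidden degrees of freedom
basis = rng.normal(size=(3, 500))            # 3 underlying spectral shapes
spectra = latent @ basis + 0.01 * rng.normal(size=(200, 500))  # + small noise

pca = PCA(n_components=5)
reduced = pca.fit_transform(spectra)         # 200 x 5 summary of 200 x 500 data

# Nearly all variance should sit in the first 3 components
print(pca.explained_variance_ratio_.round(3))
```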

As the field advanced, artificial neural networks (ANNs) emerged as a widely used tool for galaxy classification and detection of gamma-ray bursts (GRBs) during the early stages of their implementation. The application of ANNs has since expanded to encompass diverse areas, including pulsar detection, asteroid composition analysis, and the identification of gravitationally lensed quasars.

Today, astronomers use a plethora of techniques that have resulted in exciting approaches involving the discovery of exoplanets, forecasting solar activity, classification of gravitational wave signals and even reconstruction of an image of a black hole.

I will explore three pivotal applications where the integration of AI plays a crucial role in solving complex problems, in turn shaping our understanding of the cosmos:

AI-Driven Morphology Classification of Galaxies

The classification of galaxies, whether they are elliptical, spiral, or irregular, enables us to gain insights into their overall structure and shape. This understanding is instrumental in estimating their composition and evolutionary trajectory, making it a fundamental objective in modern cosmology.

The advent of extensive synoptic sky surveys has led to an overwhelming volume of data that surpasses human capacity for scrutiny based on morphology alone. Since the 2000s, machine learning (ML) has emerged as the predominant solution to this challenge and has effectively taken over the task of classifying galaxies. Classifying large astronomical databases of galaxies empowers astronomers to test theories and draw conclusions that reveal the underlying physical processes driving star formation and galaxy evolution.

The deep learning era brought forth Artificial Neural Networks (ANNs), which have accelerated the efficiency of classification and regression tasks manyfold. ANNs are computational models inspired by the human brain’s neural networks, capable of learning patterns and making predictions from large datasets. The input layer receives galaxy data, which is processed through hidden layers that perform complex computations; the output layer then generates classifications based on learned patterns. Each galaxy in the dataset is represented by a set of input features, such as photometric measurements or morphological properties derived from images.
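That input-hidden-output flow can be sketched with scikit-learn’s `MLPClassifier`; the two features, the spiral/elliptical labelling rule, and all numbers here are invented for illustration:

```python
# Tiny feedforward network classifying synthetic "galaxies" from two
# made-up features (say, a colour index and a concentration measure).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
features = rng.normal(size=(600, 2))
# Toy labelling rule: call it a spiral (1) when the feature sum is high
labels = (features.sum(axis=1) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16, 16),  # two hidden layers
                    max_iter=2000, random_state=0)
clf.fit(features, labels)                         # input -> hidden -> output

print(f"training accuracy: {clf.score(features, labels):.2f}")
```

Real morphology classifiers work on image pixels with convolutional layers, but the layered structure is the same.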

Images of the Subaru survey being classified by the model through prediction probabilities of each class. Credits: Tadaki et al. (2020)

While the vast volume of data can introduce model bias, citizen scientists worldwide have collaborated through initiatives like Galaxy Zoo and Galaxy Cruise, playing a crucial role in validating the model results. This collective effort has effectively improved the accuracy of neural networks in classifying galaxies. Under a National Astronomical Observatory of Japan (NAOJ) project led by Dr Ken-ichi Tadaki, ANNs achieved an impressive accuracy of 97.5%, identifying spirals in about 80,000 galaxies and confirming the potential of AI systems in determining galaxy morphology.

Reconstructing Black Hole images using Machine Learning

If you ask me what this century’s most remarkable scientific achievement is so far, I would say the black hole image revealed in 2019 would undoubtedly claim the top spot. It showed us what the real supermassive black hole in Messier 87 would look like if we were there to see it.

Behind all the awe lies the immense dedication of the Event Horizon Telescope team, who invested two years in observing, processing, and eventually unveiling the black hole image to the public. Recently, the same data underwent a significant enhancement with Machine Learning, giving us a crisper, more detailed view of the light around the M87 black hole. But why use ML in the first place if we already got that incredible image back in 2019?

The Event Horizon Telescope is a network of eight radio telescopes at different locations around the globe, linked into a single array that acts as an Earth-sized telescope. However, the irregular spacing between the telescopes leaves gaps in the data, like missing pieces in a jigsaw puzzle.

At first, scientists reconstructed the missing data blindly, without leaning on computer simulations or theoretical predictions. The 2019 image was therefore model-independent: the team did not assume they knew anything about what the final image should look like or what shape it would take.

Left: Original 2019 photo of the black hole in the galaxy M87. Right: New image generated by the PRIMO algorithm using the same data set. Credits: L. Medeiros (Institute for Advanced Study)

Even without any presumed model, the team still recovered the clear shape of a ring of light, just as Einstein’s theory of general relativity predicted. The ring appears because hot material orbits the black hole in a large, flattened disc whose light is distorted and bent by the black hole’s gravitational pull. As a result, this ring shape is observable from almost any viewing angle.

Now that we are fairly certain what the image of a black hole should look like, scientists have developed a new technique called PRIMO (principal-component interferometric modeling), which uses sparse coding to fill gaps in the input data. The algorithm builds on the original EHT data and fills in the missing pieces more precisely, achieving higher resolution.
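A toy analogue of this component-based gap filling (not PRIMO’s actual implementation): learn a small basis of components offline, fit that basis to the observed pixels alone, and use the fitted coefficients to predict the missing ones:

```python
# Toy analogue of component-based gap filling (NOT the real PRIMO code):
# least-squares fit a pre-learned basis to the observed pixels, then
# read the missing pixels off the fitted combination of components.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_components = 100, 4

basis = rng.normal(size=(n_components, n_pixels))  # "trained" components
true_coeffs = rng.normal(size=n_components)
true_image = true_coeffs @ basis                   # the image we never fully see

observed = rng.random(n_pixels) < 0.6              # only ~60% of pixels measured
A = basis[:, observed].T                           # basis restricted to observed pixels
coeffs, *_ = np.linalg.lstsq(A, true_image[observed], rcond=None)

reconstruction = coeffs @ basis                    # fills the gaps too
err = float(np.abs(reconstruction - true_image).max())
print(f"max reconstruction error: {err:.2e}")
```

Because far more pixels are observed than there are components, the fit is overdetermined and the missing pixels are pinned down; PRIMO’s real machinery (sparse coding over simulation-trained components) follows the same logic at much larger scale.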

The newly reconstructed image is consistent with theoretical expectations and shows a narrower ring with more prominent symmetry. The greater the detail in an image, the more accurately we can understand properties such as the ring’s diameter and thickness and the black hole’s mass.

Project lead author Lia Medeiros of the Institute for Advanced Study highlighted in her paper, “Since we cannot study black holes up-close, the detail of an image plays a critical role in our ability to understand its behaviour. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity.”

Techniques like PRIMO can also have applications beyond black holes. As Medeiros stated: “We are using physics to fill in regions of missing data in a way that has never been done before by using machine learning. This could have important implications for interferometry, which plays a role in fields from exo-planets to medicine.”

You can find more detail about this method in their paper published in The Astrophysical Journal Letters.

(Ref: https://iopscience.iop.org/article/10.3847/2041-8213/acc32d)

AI’s Role in Detecting Water on Exoplanets

The study of extra-solar planets is one of the most fascinating fields of research in astronomy. As humans, our innate curiosity drives us to seek answers about the existence of life elsewhere in the universe. The exploration begins with detecting water on exoplanets and other terrestrial bodies, which might indicate conditions suitable for life.

Astronomers have developed many techniques, most prominently spectroscopy, in which the signatures of molecules in a celestial body can be detected. However, the time-intensive nature of spectroscopy makes it a hurdle for short observations. There is therefore a need for a simpler yet more efficient method in which the initial characterization of potential targets comes first, with detailed spectroscopic analysis conducted at a later stage. This specific problem is being addressed with AI.

Artist’s view of an exoplanet where liquid water might exist. Image Credit: ESO/M. Kornmesser

In a recent study, astrophysicists Dang Pham and Lisa Kaltenegger used XGBoost, a gradient-boosting technique, to characterize the existence of water on Earth-like terrestrial exoplanets in three forms: seawater, water clouds, and snow. The algorithm is trained on reflected broadband photometry, in which the flux at specific wavelengths is measured from the light an exoplanet reflects. The model shows promising results, achieving >90% accuracy for snow and cloud detection and up to 70% accuracy for liquid water.
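The photometry-based screening idea can be sketched as follows; scikit-learn’s `GradientBoostingClassifier` stands in for XGBoost here, and the five-filter “fluxes” and the snow rule are entirely synthetic:

```python
# Sketch of broadband-photometry screening with gradient boosting.
# The data is synthetic: 1000 "planets", each with 5 fake filter fluxes.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
flux = rng.random((1000, 5))                       # fake reflected fluxes
# Toy rule: "snowy" planets reflect strongly in the first two bands
has_snow = (flux[:, :2].mean(axis=1) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    flux, has_snow, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

A fast classifier like this can triage thousands of candidates cheaply, reserving expensive spectroscopy for the most promising ones.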

In this way, a larger number of habitable-zone planets with water signatures can be screened, so that large projects like JWST can pinpoint and extensively analyze only the most favourable targets. According to Dr Pham: “By ‘following the water’, astronomers will be able to dedicate more of the observatory’s valuable survey time to exoplanets that are more likely to provide significant returns.”

Their findings are published in the Monthly Notices of the Royal Astronomical Society.

(Ref: Pham, D., & Kaltenegger, L. (2022). Follow the water: finding water, snow, and clouds on terrestrial exoplanets with photometry and machine learning. Monthly Notices of the Royal Astronomical Society: Letters, 513(1), L72-L77)

Conclusion

Through ongoing research and advancement, AI continues to shape the future of astronomical exploration, enabling scientists to delve deeper into the vast expanse of the universe. Deep learning models like convolutional neural networks are reworking observational data in innovative ways, enabling discoveries even with data collected from older surveys.

We can only imagine what groundbreaking discoveries AI will bring when it is coupled with the powerful potential of the James Webb Space Telescope and upcoming projects like the Nancy Grace Roman Telescope. These visionary projects open doors to a realm of revolutionary discoveries, while the ever-expanding volume of astronomical data can now be harnessed to its fullest potential, thanks to the innovative advancements brought forth by the age of AI.

References:

  1. Djorgovski, S. G., Mahabal, A. A., Graham, M. J., Polsterer, K., & Krone-Martins, A. (2022). Applications of AI in Astronomy. arXiv preprint arXiv:2212.01493.
  2. Fluke, C. J., & Jacobs, C. (2020). Surveying the reach and maturity of machine learning and artificial intelligence in astronomy. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(2), e1349.
  3. Impey, C. (2023, May 23). How AI is helping astronomers. EarthSky. Retrieved from https://earthsky.org/space/artifical-intelligence-ai-is-helping-astronomers-make-new-discoveries/ and https://www.astronomy.com/science/how-artificial-intelligence-is-changing-astronomy/
  4. Pham, D., & Kaltenegger, L. (2022). Follow the water: finding water, snow, and clouds on terrestrial exoplanets with photometry and machine learning. Monthly Notices of the Royal Astronomical Society: Letters, 513(1), L72-L77.
  5. Medeiros, L., Psaltis, D., Lauer, T. R., & Özel, F. (2023). The Image of the M87 Black Hole Reconstructed with PRIMO. The Astrophysical Journal Letters.
  6. NAOJ. (2020, August 11). Classifying Galaxies with Artificial Intelligence. Retrieved from https://www.nao.ac.jp/en/news/science/2020/20200811-subaru.html

Also Read: AI AND NEUROBIOLOGY: UNDERSTANDING THE BRAIN THROUGH COMPUTATIONAL MODELS

The Future of AI on Education

Artificial intelligence (AI) is a fast-growing technology capable of transforming every aspect of life. AI has long been used in machine learning, robotics, medical procedures, and automobiles. However, the world has come to recognize the power of AI since ChatGPT, an online chat application built on it, became widely accessible.

The purpose of this advancement is human welfare and progress, and one sensitive area greatly influenced by it is education. Education planners and teachers are concerned with how this technology will lead to better learning.

In the past, students and teachers used to spend a lot of time researching and analyzing information for assignments or articles. It was tedious to go through multiple sources, study the literature, and perform critical analysis to create something new.

However, with the advent of Artificial Intelligence (AI), this process has become much faster, more efficient and more reliable. AI has emerged as a valuable tool that saves time and provides essential information related to the topic while ensuring authenticity with proper references.

AI has revolutionized how students and teachers approach academic research, helping them accomplish more in less time. The technology can quickly and accurately analyze large volumes of data, and identify patterns, trends, and relationships, thus assisting scholars in discovering valuable insights that can be used to support their arguments or ideas. Additionally, AI-powered applications like plagiarism checkers can easily detect copied content, making it easier for teachers to evaluate the originality of student assignments.
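One simple building block behind such checkers is text similarity. Here is a minimal bag-of-words cosine-similarity sketch; real tools are far more sophisticated, and the sample sentences are invented:

```python
# Minimal bag-of-words cosine similarity, one building block a plagiarism
# checker might start from.
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

original = "machine learning helps astronomers classify galaxies quickly"
copied = "machine learning helps astronomers classify galaxies quickly"
fresh = "students should always cite their sources on every assignment"

print(round(cosine_similarity(original, copied), 2))  # identical text -> 1.0
print(round(cosine_similarity(original, fresh), 2))   # no shared words -> 0.0
```

Scores near 1.0 flag passages for a human reviewer; production systems add stemming, paraphrase detection, and large document indexes on top of this idea.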

Education could benefit a lot from AI by helping teachers customize learning, adjusting it to the needs of each student. With AI-enhanced learning, students can learn at their own pace, get personalized attention, and better understand what they are learning. AI can be used in many ways in education, such as intelligent tutoring systems and natural language processing. These systems help improve STEM courses by adapting lessons to each student’s individual needs.


What is Cognitive Technology?

Students can receive intellectual guidance in planning, problem-solving, and decision-making through cognitive technology. Cognitive computing may also ease the development of technology-enhanced educational material. Big data analysis using machine learning can forecast student achievement, and machine learning algorithms are already used to identify students most likely to fail and to suggest interventions.
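A hedged sketch of such a prediction (the features and the at-risk rule are invented; real systems train on actual student records):

```python
# Decision tree flagging "at-risk" students from two synthetic features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 500
attendance = rng.uniform(0.3, 1.0, n)   # fraction of classes attended
avg_grade = rng.uniform(40, 100, n)     # running average, in percent
X = np.column_stack([attendance, avg_grade])

# Toy ground truth: at risk when both attendance and grades are low
at_risk = ((attendance < 0.6) & (avg_grade < 60)).astype(int)

model = DecisionTreeClassifier(random_state=0).fit(X, at_risk)

# Flag a hypothetical student with 50% attendance and a 45% average
print(model.predict([[0.5, 45.0]])[0])
```

A flagged student would then be routed to tutoring or counselling; the value lies in intervening before the failure happens.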

Adaptive learning management systems (ALMS):

Another way education is improving is with adaptive learning management systems. These systems match the student’s learning styles, preferences, and backgrounds with the curriculum to keep them motivated and on track. They can also suggest careers and connect students with job opportunities.

Natural Language Processing (NLP):

Natural Language Processing can be useful in education. It can help chatbots and algorithms provide instant communication and personalized responses to students, which can increase their focus and interest in digital learning. NLP can also be used to analyze the tone of essays written by students.
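A minimal sketch of tone scoring (real NLP systems use trained models rather than fixed word lists; the lexicons and essays below are invented):

```python
# Lexicon-based tone scoring: count upbeat vs. struggling words.
POSITIVE = {"enjoyed", "clear", "fascinating", "confident", "improved"}
NEGATIVE = {"confusing", "boring", "difficult", "frustrated", "unclear"}

def tone_score(essay: str) -> int:
    """Positive score = upbeat tone; negative score = struggling tone."""
    words = essay.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(tone_score("I enjoyed the lesson and feel confident and improved"))  # 3
print(tone_score("The topic was confusing and the reading was boring"))    # -2
```

A teacher-facing dashboard could surface essays with strongly negative scores as candidates for a check-in.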

What is Sustainable Development Goal No. 4, and how can AI help to achieve these Goals?

Artificial intelligence has the potential to revolutionize how people teach and learn, accelerating progress toward SDG 4. (Sustainable Development Goal 4 concerns quality education and is among the 17 Sustainable Development Goals established by the United Nations in September 2015.) The education goals of the 2030 Framework can be accomplished with the help of artificial intelligence technologies. UNESCO (United Nations Educational, Scientific and Cultural Organization) is devoted to aiding its members in doing so while guaranteeing that the use of AI within learning environments is governed by the fundamental values of equality and diversity.

AI and Big Data

As science brings new ideas and information, students and researchers find it difficult to sort through the massive amount of data and information available. AI can help by analyzing big data and making this process more efficient.

AI can also help develop better teaching methods and manage data from different sources.

AI can help with analyzing large amounts of data in education. It can predict outcomes like students dropping out or not doing well and take steps to prevent that. AI can create personalized learning plans based on a student’s preferences, strengths, and weaknesses. It can also assess a student’s knowledge and skills and design lessons accordingly.

AI can also help develop better teaching methods and manage data from different sources. AI can help schools and colleges improve learning outcomes and resource utilization by analyzing data.

Can AI make students less creative?

When used appropriately, AI can enhance students’ creativity by providing them with new learning and problem-solving tools. AI technologies, such as natural language processing and machine learning, can help students explore and analyze data in sophisticated ways, leading to new insights and innovative solutions to complex problems. AI-powered virtual assistants and chatbots can also offer personalized feedback and guidance, motivating students to explore new ideas and approaches.

However, too much reliance on AI can potentially hinder creativity by limiting students’ ability to think critically and independently. If students solely depend on AI to provide them with answers or solutions, they may lose the drive to experiment and explore different approaches. Therefore, striking a balance between AI tools and traditional skills, like critical thinking, problem-solving, and creativity, is essential to nurture well-rounded and innovative individuals.

Has artificial intelligence (AI) taken over teaching?

AI is not an alternative to teachers. Teachers do more than deliver material; they shape it according to each student’s intellectual ability, and they will remain the central hub of the educational system. AI can assist the teacher by creating data-driven plans to present to learners.

Most crucially, AI cannot take the teacher’s place, because the teacher provides emotional support to students. AI cannot supply creativity or passion, nor can it act as a guardian or guide the way a teacher does.

AI can also serve as a teacher’s assistant, helping grade exams. In some parts of the world, AI grading systems are already in use. For example, China has incorporated paper-grading artificial intelligence into its classrooms, according to the South China Morning Post.

AI has transformed traditional teaching methods into a more flexible and creative style, allowing teachers to better understand their students’ strengths and weaknesses in certain subjects and to provide targeted support where needed.

Implications of AI in Education

AI can meet students at their own level, give them constant feedback, and support their learning abilities. Personalized feedback also helps students monitor their own progress.

AI also helps students access information for free or at low cost. It lets students learn anywhere, without needing a classroom at a fixed time, which also saves them money.

Artificial Intelligence and Ethical Concerns

A major concern with AI is student privacy. The future of AI in education depends on protecting students’ personal information, behaviour analyses, and feedback reports. There must be laws governing the control of the personal information of students and teachers, and no third party should have any access to a student’s personal data.

The Concluding Note

In conclusion, Artificial Intelligence brings great productivity and revolutionises education by providing an intellectual framework, feedback, and equal access to quality free education. Teachers are not replaced by AI, nor does it pose a danger to the intellectual minds in their fields; rather, it works with them as an augmented system in education. Though AI has many positive aspects, there remains a threat of student and teacher personal information being stolen. In general, AI technology has the ability to improve education and help students realize their full potential.

References

Altaf, M., & Javed, F. (2018). The Impact of Artificial Intelligence on Education. Journal of Information Systems and Technology Management, 15(3), e201818006.

Arslan-Arı, İ., & Karaaslan, M. I. (2019). The Role of Artificial Intelligence in Education: A Review Study. Educational Sciences: Theory and Practice, 19(4), 1463-1490.

Hwang, G., & Wu, P. H. (2018). Applications, Impacts and Trends of Mobile Technologies in Augmented Reality-Based Learning: A Review of the Literature. Journal of Educational Technology & Society, 21(2), 203-222.

Shinde, R., & Bharkad, D. (2019). The Future of Learning with Artificial Intelligence. International Journal of Engineering & Technology, 11(4), 241-247.

Wierstra, R., Schaul, T., Peters, J., & Schmidhuber, J. (2014). Natural Evolution Strategies. Journal of Machine Learning Research, 15, 949-980.

Beijing Consensus on Artificial Intelligence and Education. 2019, UNESCO.

Also Read: AI AND THE FUTURE OF PROSTHETICS