
“Towards Singularity- Inspiring AI”: A Captivating Journey into the Future of Artificial Intelligence

“Towards Singularity – Inspiring AI” is a thought-provoking documentary that takes viewers on an exhilarating journey into the ever-evolving world of Artificial Intelligence (AI), exploring AI’s potential to revolutionise our lives and shape the future of humanity. The film delves into the possibilities and implications of reaching the long-discussed technological Singularity.

While we constantly hear severe warnings about the dangers of building intelligent machines, neuropsychotherapist and filmmaker Matthew Dahlitz from the University of Queensland believes that we shouldn’t be worrying, at least not yet.

Experts featured in the documentary include Professor Geoffrey Goodhill, Professor Pankaj Sah, Dr Peter Stratton, and Professor Michael Milford of the Queensland Brain Institute (QBI).

According to Dahlitz, the film’s title, Towards Singularity, alludes to a hypothetical time when machines surpass the intelligence of their human creators. Some experts believe this period may also mark an inevitable and irreversible tipping point in technology and artificial intelligence.

Towards Singularity examines how neuroscience influences the creation of AI. The emergence of intelligent machines draws on the complexity of our incredible brain, one of the most intricate systems we know, and such machines may eventually become more intelligent than humans, potentially constituting a new species. The documentary’s interviews with QBI researchers examine how brain science is used to guide the creation of super-intelligent computers.

Dahlitz said, “The media is frequently theatrical, suggesting that the world is about to end in a decade or two due to the dangers of AI.”

“However, after we began speaking with academics who are closely connected to the topic, we discovered that most specialists say there is no need for concern. I had hoped that we might be able to capture some speculation about the dangers of the Singularity for dramatic effect, but we couldn’t. There isn’t much stress about what will happen, because the researchers were optimistic.” One of the strong points of “Towards Singularity – Inspiring AI” is its ability to showcase the positive impact of AI on various industries.

Dr Peter Stratton, a researcher and QBI Honorary Research Fellow, explains in the documentary: “We choose what information we want computers to learn, then develop mathematical formulas that specify how that network learns. Therefore, the data we feed the machine fully determines its level of intelligence. So it is totally up to us what we feed into those machines.”

According to Dr Stratton, AI is “brain-inspired” but not truly brain-like. While the core processing components of these networks resemble neurons, they are trained very differently from how the brain functions: instead of learning in a more natural, self-organising way like the human brain, they receive mathematical training.

“The biggest threat with AI is not that it decides it wants to compete with humans and wipe us out; it is the risk of unintended consequences.” ~Dr Peter Stratton

In conclusion, “Towards Singularity – Inspiring AI” offers a captivating picture of the future of AI, showcasing its potential benefits and ethical considerations. It strikes a balance between accessibility and depth, making it a valuable watch for anyone intrigued by advancements in AI and their potential implications for society.

I highly recommend this documentary. Its message is “Do not fear the rise of the machines”: the machines are there to help us, not to compete with us. Media and movies like Transformers have created a negative image of machines and AI, suggesting that one day they will rule us, and that is simply not right.

Also Read: Refining the visual experience through AI: DALL-E

From Moore’s Law to AI Revolution: Transforming Innovation Landscape

Moore’s law states that the number of transistors on a chip doubles roughly every two years. Looking at the pace at which everything around us progresses, a similar pattern of exponential growth appears in many other technologies.
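The doubling pattern is easy to state as a formula: a count that doubles every two years grows as start × 2^(years/2). Here is a quick Python sketch (the projection is illustrative, not industry data):

```python
# Sketch: Moore's law as exponential growth (doubling every two years).
def transistors(start_count, years, doubling_period=2):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period)

# Illustration: a 2,300-transistor chip (Intel 4004, 1971) projected 40 years
# ahead doubles 20 times, passing two billion transistors.
print(f"{transistors(2300, 40):,.0f}")  # → 2,411,724,800
```

The same three-line function projects any technology that follows a fixed doubling period, which is why Moore’s law is so often used as shorthand for exponential growth.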

Consider this: the first general-purpose electronic computer, ENIAC, was completed in 1945, and just 24 years later, in 1969, NASA achieved the remarkable feat of landing people on the moon. How was that possible? The Apollo Guidance Computer (AGC) played a vital role in the success of the lunar landings, enabling the spacecraft’s safe journey to the moon and back to Earth.

The significance of the AGC is evident from the fact that it allowed the Apollo modules to travel to the moon and return to Earth in one piece. To achieve this, engineers had to build a computer that was not only far smaller than the machines of the time but also capable enough for the mission.

The development of the computer had a profound influence, making it possible for people to set foot on the moon. Computers have evolved from massive equipment into small gadgets with better performance and more valuable output, and they have become an essential component of our lives in the modern world.

A good example is the smartphone in your pocket. Unlike the reliance on newspapers in the past, it offers rapid access to local and international news, saving us a great deal of time. Being able to interact instantly with someone on the other side of the globe has revolutionized communication; it once took months for a letter to reach its destination.

Recent advances in technology help us speed up our daily routines and processes, enabling us to use our time much more effectively. This acceleration fosters rapid and effective innovation and technological advancement, significantly impacting our daily lives.

The launch of ChatGPT in November 2022 revealed the full potential of AI technology, marking a significant technological turning point. ChatGPT has significantly impacted the IT industry, sparking community conversations and discussions. Upon closer inspection, it is evident that ChatGPT has dramatically increased productivity and unlocked new levels of creativity in various sectors.

As we delve into the subject, we first examine what exactly artificial intelligence is.

What is AI?

John McCarthy, emeritus professor of computer science at Stanford University, defined Artificial Intelligence in his 2004 paper, “What is Artificial Intelligence?”. It states that:

“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to biologically observable methods.” 

ChatGPT has significantly impacted the IT industry, sparking conversations and discussions across communities.

In its most basic form, artificial intelligence combines computer science and substantial datasets to facilitate problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are commonly mentioned together. These fields use AI algorithms to build expert systems that make predictions or classifications based on incoming data (IBM).

How does AI help?

Particularly in the context of exponential development, artificial intelligence is playing a revolutionary role in accelerating innovation.

A critical stage in innovation is ideation: brainstorming different ideas on a specific topic in order to find the most feasible one. This process is often time-consuming and requires careful consideration. With the release of AI tools such as ChatGPT, it has become much easier to brainstorm ideas about a specific topic.

AI systems can process massive amounts of data and find patterns and correlations that people would not immediately see. As a result, researchers and innovators can better understand the world around them, make data-driven decisions, and spot fresh opportunities for invention.
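As a toy illustration of pattern-finding, the snippet below builds a small synthetic dataset (the variables are invented for this example) and computes a correlation matrix, surfacing a relationship that would be hard to spot by scanning raw numbers:

```python
import numpy as np

# Sketch: surfacing a hidden correlation in a dataset (synthetic data).
rng = np.random.default_rng(0)
x = rng.normal(size=500)            # one measured variable
noise = rng.normal(size=500)
y = 0.8 * x + 0.2 * noise           # a second variable that quietly tracks x
z = rng.normal(size=500)            # an unrelated variable

corr = np.corrcoef([x, y, z])       # pairwise correlation matrix
print(corr.round(2))                # x and y correlate strongly; z does not
```

Real systems apply the same idea at far larger scale and with richer models, but the principle is identical: let the machine scan every pairing so the analyst only inspects the strong signals.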

Predictive analytics forecasts future trends and outcomes based on historical data and AI algorithms. Businesses and innovators can make proactive decisions and adjust to changing conditions more quickly by anticipating client needs, market demands, and technical improvements.
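A minimal sketch of trend forecasting, using a plain least-squares line on made-up yearly figures (a production system would use richer models and real data):

```python
import numpy as np

# Sketch: forecasting next year's value from a historical trend (made-up figures).
years = np.array([2018, 2019, 2020, 2021, 2022])
sales = np.array([100, 120, 145, 170, 200])   # hypothetical yearly figures

t = years - years[0]                          # re-base years for numerical stability
slope, intercept = np.polyfit(t, sales, 1)    # fit a linear trend
forecast_2023 = slope * (2023 - years[0]) + intercept
print(round(forecast_2023))                   # → 222
```

Even this crude linear fit captures the point: once the trend is modelled, next year’s figure comes out of the formula instead of a guess.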

AI-powered automation streamlines and optimizes dull, repetitive tasks, freeing people to devote themselves to more inventive and creative projects. This increased efficiency makes faster development cycles possible and leaves room to consider more options.

AI algorithms can optimise complicated systems and processes by simulating many scenarios and determining the most effective configurations. This shortens development time and cost by enabling innovators to test and improve ideas quickly without the need for physical prototyping.

Artificial intelligence also makes it possible to test a system faster and with greater accuracy, offering much more precise results and feedback on how the system would perform in the real world by testing it in a simulated environment based on real-life scenarios.

Artificial General Intelligence: “AI under the hood – AI represented here by geometric matrices has a go at generating cellular data. It represents a future whereby AI could, in theory, replicate or generate new organic structures used in research areas such as medicine and biology.” Artist: Domhnall Malone

Faster information retrieval, analysis, and comprehension are now possible thanks to AI-powered language processing and machine learning. By utilizing these tools, innovators can keep up with the most recent findings, scientific advancements, and industry best practices, quickening their learning and creative processes.

Real-Life Examples

Here are a few examples of where AI has transformed the innovation process.

In 2020, Google’s DeepMind subsidiary introduced AlphaFold, a technology that can predict the shape of highly complex protein structures in minutes.

As stated by the AlphaFold team, predicting the 3D structure of proteins is one of the most significant problems in biology. Overcoming this obstacle could significantly deepen our understanding of human health, disease, and our environment, especially in areas like drug development and sustainability.

Proteins support all biological activity in every living thing, not simply within your body; they serve as the basis of life. The ability to predict the structures of millions of as-yet-unidentified proteins would help us better comprehend life itself, fight disease, and identify new treatments more quickly.

The latest AlphaFold data release contains around 200 million protein structures (AlphaFold).

Now imagine the quickened pace at which scientists will be able to understand diseases and develop the right drugs to counter them. In one example provided by the AlphaFold team, researchers at the Centre for Enzyme Innovation (CEI) are already using AlphaFold to uncover and recreate enzymes that can break down single-use plastics.

Another AI tool I have encountered recently is Copilot ai. I was working on an academic writing project the other day and wondered whether there was a ChatGPT-style AI tool that could help a person understand a research paper much more quickly and efficiently.

You can see that even I am looking for tools to speed up my working process. This is exactly what I am writing about: AI helping us speed up the innovation process.

My first thought was that I would need help from a developer to build such an AI tool for research purposes. But, not surprisingly, I found similar tools on the internet. They allow you to converse with a research paper and help you find more papers on similar topics.

It usually takes me a lot of time to read and adequately understand a research paper, but with these helpful tools I could complete my project relatively quickly, which ultimately allowed me to work on more projects in the time normally allocated to just one.

Considering the above examples, I can say with confidence that AI will transform the way we innovate and advance technologically.

Watching the field of AI evolve so rapidly, we can also see its potential to transform many aspects of human life. From healthcare to finance to entertainment, AI is helping us in countless ways, enabling us to unlock new levels of creativity and productivity.

If it keeps evolving like this, the day when we can answer some of nature’s biggest mysteries may not be far off. One thing, however, may concern some readers: since this technology is still in its infancy, experts are unsure how far it can go. Hence, it is necessary to regulate the use of AI to ensure the technology is used for the betterment of humanity.

References

ScienceDirect, Technology Review, Insights, ML-Science, IBM, Alphafold, Deepmind

Also read: HOW DATA SCIENCE ACCELERATES SCIENTIFIC PROGRESS

Harnessing the Potential of AI in Modern Astronomy

Artificial Intelligence (AI) systems have affected our world in many ways since the field’s rise in the 1950s and have made a profound impact across a wide range of daily applications, making AI one of the fastest-growing technologies globally. Its uses range from automating digital tasks and making predictions to enhancing efficiency and powering smart language assistants.

Whether in business or architecture, almost every profession is leveraging AI to enhance productivity in its workflows. It is natural to ask whether astronomers are using AI to understand the universe better and, if so, what approaches they are taking. In fact, they have embraced the potential of Machine Learning (a subset of AI) since the 1980s, so much so that AI has become a standard part of the astronomer’s toolkit. This article highlights the pressing need for such systems in astronomical data analysis and dives deep into some recent applications where AI is employed.

The launch of the Hubble Space Telescope revolutionized the field of astronomy, yielding stunning imagery and essential data that has fundamentally altered our understanding of the universe. Today, driven by extraordinary advancements in AI, astronomy is experiencing ongoing evolution, uncovering significant insights that may elude human observation. Methods like Machine learning and neural networks have enabled classification, regression, forecasting, and discovery, leading to new knowledge and new insights.

Atacama Large Millimeter/submillimeter Array (ALMA) in Chile
Credits: Babak Tafreshi

The necessity of AI automation

A significant aspect of astronomy revolves around managing big data, where ‘big’ means petabytes (1,000 terabytes) and even exabytes (1,000 petabytes) of data collected from sky surveys like SDSS, Gaia, TESS, and more. For instance, Gaia, a survey mission to map the Milky Way galaxy, collects approximately 50 terabytes of data each day. With highly capable, AI-powered computer processing, astronomers can now analyze such massive volumes of data efficiently, significantly reducing scientists’ workload.
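Taking the daily figure quoted above at face value, one line of arithmetic shows how quickly such a survey climbs into petabyte territory:

```python
# Back-of-the-envelope data volume, using the ~50 TB/day figure quoted above.
tb_per_day = 50
tb_per_year = tb_per_day * 365
print(f"{tb_per_year / 1000:.2f} PB per year")  # → 18.25 PB per year
```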

According to Brant Robertson, professor of astronomy at UC Santa Cruz, “There are some things we simply cannot do as humans, so we have to find ways to use computers to deal with the huge amount of data that will be coming in over the next few years from large astronomical survey projects.” 

Even if all of humanity were to dedicate themselves to analyzing the vast amount of astronomical data, it would take an inconceivably long time to reach meaningful conclusions. With the assistance of AI models, however, simultaneous processing and faster discovery of valuable information are possible, ultimately leading to increased efficiency and much shorter turnaround times. In addition, intelligent machines improve accuracy and precision, performing repetitive tasks with minimal to no errors.

The Emergence of AI in Astronomy

The utilization of AI techniques has evolved significantly over the years. A paper published in 2020, “Surveying the Reach and Maturity of Machine Learning and AI in Astronomy”, offers valuable insights into the historical progression of AI in this domain. Since the 1980s, principal component analysis (PCA) and decision trees (DTs) have been employed for tasks such as the morphological classification of galaxies and redshift estimation.

As the field advanced, artificial neural networks (ANNs) emerged as a widely used tool for galaxy classification and detection of gamma-ray bursts (GRBs) during the early stages of their implementation. The application of ANNs has since expanded to encompass diverse areas, including pulsar detection, asteroid composition analysis, and the identification of gravitationally lensed quasars.

Today, astronomers use a plethora of techniques that have resulted in exciting approaches involving the discovery of exoplanets, forecasting solar activity, classification of gravitational wave signals and even reconstruction of an image of a black hole.

I will explore three pivotal applications where the integration of AI plays a crucial role in solving complex problems, in turn shaping our understanding of the cosmos:

AI-Driven Morphology Classification of Galaxies

The classification of galaxies, whether they are elliptical, spiral, or irregular, enables us to gain insights into their overall structure and shape. This understanding is instrumental in estimating their composition and evolutionary trajectory, making it a fundamental objective in modern cosmology.

The advent of extensive synoptic sky surveys has led to an overwhelming volume of data that surpasses human capacity for scrutiny based on morphology alone. Since the 2000s, machine learning (ML) has emerged as the predominant solution to this challenge and has effectively taken over the task of classifying galaxies. Classifying large astronomical databases of galaxies empowers astronomers to test theories and draw conclusions that reveal the underlying physical processes driving star formation and galaxy evolution.

The Deep Learning era brought forth Artificial Neural Networks (ANNs), which have accelerated classification and regression tasks many times over. ANNs are computational models inspired by the human brain’s neural networks, capable of learning patterns and making predictions from large datasets. The input layer receives galaxy data, which is processed through hidden layers that perform complex computations; the output layer then generates classifications based on learned patterns. Each galaxy in the dataset is represented by a set of input features, such as photometric measurements or morphological properties derived from images.

Images of the Subaru survey being classified by the model through prediction probabilities of each class. Credits: Tadaki et al. (2020)

While the vast volume of data can introduce model bias, citizen scientists worldwide have collaborated through initiatives like Galaxy Zoo and Galaxy Cruise, playing a crucial role in validating model results. This collective effort has effectively improved the accuracy of neural networks in classifying galaxies. Under a National Astronomical Observatory of Japan (NAOJ) project led by Dr Ken-ichi Tadaki, ANNs achieved an impressive accuracy of 97.5%, identifying spirals in about 80,000 galaxies and confirming the potential of AI systems in identifying the morphology of galaxies.

Reconstructing Black Hole images using Machine Learning

If you ask me what this century’s most remarkable scientific achievement is thus far, I would say that the black hole image revealed in 2019 would undoubtedly claim the top spot. It shows what the real supermassive black hole in Messier 87 would look like if we were there to see it.

Behind all the awe lies the immense dedication of the Event Horizon Telescope team, who invested two years in observing, processing, and eventually unveiling the black hole image to the public. Recently, the same data underwent a significant enhancement with Machine Learning, giving us a crisper, more detailed view of the light around the M87 black hole. But if we already had that incredible image in 2019, why use ML at all?

The Event Horizon Telescope is a network of eight radio telescopes in different parts of the globe, linked into a single array to form an effectively Earth-sized telescope. However, gaps arise in the data because of the irregular spacing between the telescopes, just like missing pieces in a jigsaw puzzle.

At first, scientists reconstructed the absent data while preserving model independence: they did not assume they knew anything about what the final image should look like, or what shape it would take.

Left: Original 2019 photo of the black hole in the galaxy M87. Right: New image generated by the PRIMO algorithm using the same data set. Credits: L. Medeiros (Institute for Advanced Study)

Without any presumed predictions, the team still managed to get a clear shape of a ring of light as Einstein’s theory of general relativity predicted. The appearance of a ring is attributed to the hot material orbiting the black hole in a large, flattened disc that becomes distorted and bends due to the black hole’s gravitational pull. As a result, this ring shape is observable from almost any viewing angle.

Now that we are fairly certain what the image of a black hole should look like, scientists have developed a new technique called PRIMO (Principal-component Interferometric Modeling), which uses sparse coding to fill gaps in the input data. The algorithm builds on the initial EHT data and fills in the missing pieces more precisely, achieving higher resolution.

The newly reconstructed image is consistent with the theoretical expectations and shows a narrower ring with a more prominent symmetry. The greater the detail in an image, the more accurately we can understand the properties, such as the ring’s mass, diameter, and thickness.

Project lead author Lia Medeiros of the Institute for Advanced Study highlighted in her paper, “Since we cannot study black holes up-close, the detail of an image plays a critical role in our ability to understand its behaviour. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity.”

Techniques like PRIMO can also have applications beyond black holes. As Medeiros stated: “We are using physics to fill in regions of missing data in a way that has never been done before by using machine learning. This could have important implications for interferometry, which plays a role in fields from exo-planets to medicine.”

You can find more detail about the method in their paper published in The Astrophysical Journal Letters.

(Ref: https://iopscience.iop.org/article/10.3847/2041-8213/acc32d)

AI’s Role in Detecting Water on Exoplanets

The study of extrasolar planets is one of the most fascinating fields of research in astronomy. As humans, our innate curiosity drives us to seek answers about the existence of life elsewhere in the universe. The exploration begins with detecting water on exoplanets and other terrestrial bodies, which might indicate conditions for the formation of life.

Astronomers have developed many techniques, most prominently spectroscopy, in which the signatures of molecules in a celestial body can be detected. However, the time-intensive nature of spectroscopy makes it a hurdle for short observations. There is therefore a need for a simpler yet much more efficient method in which the initial characterization of potential targets happens first, before detailed spectroscopic analysis at a later stage. This specific problem is being addressed with AI.

Artist’s view of an exoplanet where liquid water might exist. Image Credit: ESO/M. Kornmesser

In a recent study, astrophysicists Dang Pham and Lisa Kaltenegger used XGBoost, a gradient-boosting technique, to characterize the presence of water on Earth-like terrestrial exoplanets in three forms: seawater, water clouds, and snow. The algorithm is trained on reflected broadband photometry, in which the flux at specific wavelengths is measured from an exoplanet’s reflected light. The model shows promising results, achieving >90% accuracy for snow and cloud detection and up to 70% accuracy for liquid water.
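To make the gradient-boosting idea concrete, here is a miniature version built from decision stumps in plain numpy, trained on synthetic “photometry”. This is a simplified stand-in for XGBoost, not the authors’ pipeline: every feature, label rule, and number below is invented for illustration:

```python
import numpy as np

# Minimal gradient-boosting sketch: each stump corrects the residuals of the
# ensemble so far (synthetic data; a stand-in for XGBoost, not the real model).
rng = np.random.default_rng(7)
X = rng.uniform(size=(300, 3))                      # toy broadband fluxes
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(float)   # toy "water present" label

def fit_stump(X, residual):
    """Best single-feature threshold split minimizing squared error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= thr
            pred = np.where(left, residual[left].mean(), residual[~left].mean())
            err = ((residual - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, thr, residual[left].mean(), residual[~left].mean())
    return best[1:]

# Boosting loop: add 50 small corrections, each scaled by a learning rate.
pred, stumps, lr = np.full(len(y), y.mean()), [], 0.5
for _ in range(50):
    j, thr, lval, rval = fit_stump(X, y - pred)
    pred += lr * np.where(X[:, j] <= thr, lval, rval)
    stumps.append((j, thr, lval, rval))

accuracy = ((pred > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

XGBoost applies the same residual-correcting idea with deeper trees, regularization, and a proper loss gradient, which is what lets it screen photometric targets quickly compared with full spectroscopy.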

In this way, a larger number of habitable-zone planets with water signatures can be screened, so that large projects like JWST can pinpoint and extensively analyze only the most favourable targets. According to Dr Pham: “By ‘following the water’, astronomers will be able to dedicate more of the observatory’s valuable survey time to exoplanets that are more likely to provide significant returns.”

Their findings are published in the Monthly Notices of the Royal Astronomical Society.

(Ref: Pham, D., & Kaltenegger, L. (2022). Follow the water: finding water, snow, and clouds on terrestrial exoplanets with photometry and machine learning. Monthly Notices of the Royal Astronomical Society: Letters, 513(1), L72-L77.)

Conclusion

Through ongoing research and advancement, AI continues to shape the future of astronomical exploration, enabling scientists to delve deeper into the vast expanse of the universe. Deep learning models like convolutional neural networks are revamping observational data analysis in innovative ways, enabling discoveries even with data collected from older surveys.

We can only imagine what groundbreaking discoveries AI will bring when it is coupled with the powerful potential of the James Webb Space Telescope and upcoming projects like the Nancy Grace Roman Telescope. These visionary projects open doors to a realm of revolutionary discoveries, while the ever-expanding volume of astronomical data can now be harnessed to its fullest potential, thanks to the innovative advancements brought forth by the age of AI.

References:

  1. Djorgovski, S. G., Mahabal, A. A., Graham, M. J., Polsterer, K., & Krone-Martins, A. (2022). Applications of AI in Astronomy. arXiv preprint arXiv:2212.01493.
  2. Fluke, C. J., & Jacobs, C. (2020). Surveying the reach and maturity of machine learning and artificial intelligence in astronomy. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10(2), e1349.
  3. Impey, C. (2023, May 23). How AI is helping astronomers. EarthSky. Retrieved from https://earthsky.org/space/artifical-intelligence-ai-is-helping-astronomers-make-new-discoveries/ and https://www.astronomy.com/science/how-artificial-intelligence-is-changing-astronomy/
  4. Pham, D., & Kaltenegger, L. (2022). Follow the water: finding water, snow, and clouds on terrestrial exoplanets with photometry and machine learning. Monthly Notices of the Royal Astronomical Society: Letters, 513(1), L72-L77.
  5. Medeiros, L., Psaltis, D., Lauer, T. R., & Özel, F. (2023). The Image of the M87 Black Hole Reconstructed with PRIMO. The Astrophysical Journal Letters.
  6. NAOJ. (2020, August 11). Classifying Galaxies with Artificial Intelligence. Retrieved from https://www.nao.ac.jp/en/news/science/2020/20200811-subaru.html

Also Read: AI AND NEUROBIOLOGY: UNDERSTANDING THE BRAIN THROUGH COMPUTATIONAL MODELS

The Future of AI in Education

Artificial intelligence (AI) is a fast-growing technology capable of transforming every aspect of life. AI has long been used in machine learning, robotics, medical procedures, and automobiles. However, the world has come to recognize the power of AI since ChatGPT, an AI chat tool, became widely accessible.

The purpose of this advancement is human welfare and progress, and one sensitive area greatly influenced by it is education. Education planners and teachers are considering how this facility can lead to better learning.

In the past, students and teachers used to spend a lot of time researching and analyzing information for assignments or articles. It was tedious to go through multiple sources, study the literature, and perform critical analysis to create something new.

However, with the advent of Artificial Intelligence (AI), this process has become much faster, more efficient and more reliable. AI has emerged as a valuable tool that saves time and provides essential information related to the topic while ensuring authenticity with proper references.

AI has revolutionized how students and teachers approach academic research, helping them accomplish more in less time. The technology can quickly and accurately analyze large volumes of data and identify patterns, trends, and relationships, assisting scholars in discovering valuable insights that support their arguments or ideas. Additionally, AI-powered applications like plagiarism checkers can easily detect copied content, making it easier for teachers to evaluate the originality of student assignments.

Education could benefit a lot from AI by helping teachers customize learning, making it better by adjusting it according to the needs of the student. With AI-enhanced learning, students can learn at their own pace, get personalized attention, and understand what they are learning. AI can be used in many ways in education, such as intelligent tutoring systems and natural language processing. These systems help improve STEM courses by adapting lessons to each student’s individual needs.

Education could benefit a lot from AI by helping teachers customize learning, making it better by adjusting it according to the needs of the student.

What is Cognitive Technology?

Through cognitive technology, students can receive intellectual guidance in planning, problem-solving, and decision-making. Cognitive computing may also ease the development of technology-enhanced educational material. Big-data analysis using machine learning can forecast student achievement, and machine-learning algorithms are already used to identify students who are most likely to fail and to suggest interventions.
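A minimal sketch of the failure-prediction idea, using logistic regression on synthetic attendance and quiz data. The features, thresholds, and numbers are invented for illustration; a real system would draw on far richer student records:

```python
import numpy as np

# Sketch: flagging at-risk students with logistic regression (synthetic data).
rng = np.random.default_rng(3)
n = 400
attendance = rng.uniform(0.3, 1.0, n)         # fraction of classes attended
quiz_avg = rng.uniform(0.2, 1.0, n)           # average quiz score
X = np.column_stack([np.ones(n), attendance, quiz_avg])

# Toy ground truth: low attendance and low quiz scores raise failure risk.
fail = (0.6 * attendance + 0.4 * quiz_avg
        + rng.normal(0, 0.05, n) < 0.55).astype(float)

w = np.zeros(3)
for _ in range(2000):                         # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))              # predicted failure probability
    w -= 0.1 * X.T @ (p - fail) / n

flagged = (1 / (1 + np.exp(-X @ w))) > 0.5    # students to offer support to
accuracy = (flagged == (fail > 0.5)).mean()
print(f"accuracy: {accuracy:.2f}")
```

The point is not the model itself but the workflow: once risk scores exist, an institution can intervene early instead of discovering the problem at exam time.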

Adaptive learning management systems (ALMS):

Another way education is improving is with adaptive learning management systems. These systems match the student’s learning styles, preferences, and backgrounds with the curriculum to keep them motivated and on track. They can also suggest careers and connect students with job opportunities.

Natural Language Processing (NLP):

Natural Language Processing can be useful in education. It can help chatbots and algorithms provide instant communication and personalized responses to students, which can increase their focus and interest in digital learning. NLP can also be used to analyze the tone of essays written by students.
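As an illustration of the tone-analysis idea (not of any production NLP system), a toy lexicon-based scorer can estimate the tone of an essay. The word lists below are invented for the sketch:

```python
# Toy lexicon-based tone scorer; real NLP systems use trained models.
POSITIVE = {"hopeful", "curious", "confident", "enjoy", "excited"}
NEGATIVE = {"bored", "confused", "frustrated", "anxious", "dislike"}

def tone_score(essay):
    """Return a score in [-1, 1]: negative = gloomy tone, positive = upbeat."""
    words = [w.strip(".,!?").lower() for w in essay.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(tone_score("I was excited and curious about the experiment."))  # 1.0
print(tone_score("I felt bored and frustrated by the homework."))     # -1.0
```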

What is Sustainable Development Goal 4, and how can AI help achieve it?

Artificial intelligence has the potential to revolutionize how people teach and learn, accelerating progress toward SDG 4 (Sustainable Development Goal 4, on quality education, is one of the 17 Sustainable Development Goals established by the United Nations in September 2015). The education goals of the 2030 Framework can be accomplished with the help of artificial-intelligence technologies. UNESCO (the United Nations Educational, Scientific and Cultural Organization) is devoted to aiding its member states in doing so while guaranteeing that AI's use within learning environments is governed by the fundamental values of equality and diversity.

AI and Big Data

As science brings new ideas and information, students and researchers find it difficult to sort through the massive amount of data and information available. AI can help by analyzing big data and making this process more efficient.


AI can help with analyzing large amounts of data in education. It can predict outcomes like students dropping out or not doing well and take steps to prevent that. AI can create personalized learning plans based on a student’s preferences, strengths, and weaknesses. It can also assess a student’s knowledge and skills and design lessons accordingly.

AI can also help develop better teaching methods and manage data from different sources. AI can help schools and colleges improve learning outcomes and resource utilization by analyzing data.

Can AI make students less creative?

When used appropriately, AI can enhance students’ creativity by providing them with new learning and problem-solving tools. AI technologies, such as natural language processing and machine learning, can help students explore and analyze data in sophisticated ways, leading to new insights and innovative solutions to complex problems. AI-powered virtual assistants and chatbots can also offer personalized feedback and guidance, motivating students to explore new ideas and approaches.

However, too much reliance on AI can potentially hinder creativity by limiting students’ ability to think critically and independently. If students solely depend on AI to provide them with answers or solutions, they may lose the drive to experiment and explore different approaches. Therefore, striking a balance between AI tools and traditional skills, like critical thinking, problem-solving, and creativity, is essential to nurture well-rounded and innovative individuals.

Has artificial intelligence (AI) taken over teaching?

AI is not a replacement for teachers. Teachers do more than deliver content: they shape it to each student's intellectual ability, and they will remain the central hub of the educational system. AI can, however, assist teachers in creating data-driven plans to present to learners.

Most crucially, AI cannot take the teacher's place, because the teacher provides emotional support to students. AI cannot offer creativity or passion, nor can it act as a guardian or guide the way a teacher does.

AI can also serve as a teaching assistant by helping grade exams. In some parts of the world, AI grading is already in practice: according to the South China Morning Post, China has incorporated paper-grading artificial intelligence into its classrooms.

AI has transformed traditional teaching methods into a more flexible and creative style, allowing teachers to better understand their students' strengths and weaknesses in particular subjects and to provide targeted support where needed.

Implications of AI in Education

AI can give students constant feedback at their own level and support their learning. Personalized feedback also helps students monitor their own progress.

AI also helps students access information for free or at low cost. It lets them learn anywhere, without a classroom or a fixed timetable, which saves money as well.

Artificial Intelligence and Ethical Concerns

A major concern with AI is student privacy. The future of AI in education depends on protecting students' personal information, behaviour analyses, and feedback reports. There must be laws governing control of students' and teachers' personal information, and no third party should have any access to it.

The Concluding Note

In conclusion, Artificial Intelligence brings great productivity and is revolutionising education by providing an intellectual framework, feedback, and equal access to quality free education. Teachers are not replaced by AI, nor does it endanger the intellectual work of their fields; rather, it works alongside them as an augmented system in education. Though AI has many positive aspects, there remains the threat of students' and teachers' personal information being stolen. In general, AI technology has the ability to improve education and help students realize their full potential.

References

Altaf, M., & Javed, F. (2018). The Impact of Artificial Intelligence on Education. Journal of Information Systems and Technology Management, 15(3), e201818006.

Arslan-Arı, İ., & Karaaslan, M. I. (2019). The Role of Artificial Intelligence in Education: A Review Study. Educational Sciences: Theory and Practice, 19(4), 1463-1490.

Hwang, G., & Wu, P. H. (2018). Applications, Impacts and Trends of Mobile Technologies in Augmented Reality-Based Learning: A Review of the Literature. Journal of Educational Technology & Society, 21(2), 203-222.

Shinde, R., & Bharkad, D. (2019). The Future of Learning with Artificial Intelligence. International Journal of Engineering & Technology, 11(4), 241-247.

Wierstra, R., Schaul, T., Peters, J., & Schmidhuber, J. (2014). Natural Evolution Strategies. Journal of Machine Learning Research, 15, 949-980.

UNESCO. (2019). Beijing Consensus on Artificial Intelligence and Education.


Refining the visual experience through AI: DALL-E


The 21st century has opened avenues for humankind that were once deemed impossible. Technological evolution offers multipurpose benefits and enables humanity to delve into the heart of scientific knowledge. Astonishing as it may seem, scientific advancement is proof of human intelligence. Ideologies have become practical tools, and knowledge has been used to create technological marvels, thanks in no small part to artificial intelligence.

“Artificial Intelligence is whatever hasn’t been done yet” -Larry Tesler

DALL-E, an exceptional scientific feat, is a visual storytelling tool. It uses artificial intelligence to create graphic representations, translating textual descriptions into images. The name “DALL-E” combines two references: Salvador Dalí, the famous artist, and WALL-E, the Pixar film. It is an excellent fusion of heterogeneous ideas, concepts, and thoughts brought together.

Apart from providing its users with an immersive visual experience, DALL-E can interpret the intricate relationships between objects, making the whole experience transformative. It excels at turning ideas into realistic graphics, and also at modifying existing images and adding new features to them. Overall, it elevates the visual experience for users.

Developed by OpenAI, DALL-E aims to provide users with a friendly, safe, and experimental tool. The first model was made public in 2021; however, it had technical shortcomings. For example, its images would occasionally come out blurred, and the interface felt cluttered. The company noted these issues, modified the initial version, and released DALL-E 2 in April 2022.

The latest version can produce more realistic images and use different styles. DALL-E amalgamates three features: machine learning, natural language processing, and computer vision.

DALL·E 2 image generation process

Operational Procedure: A Glance

DALL-E works through four different steps to draft the text input to an actual image:

  • Pre-processing is the first step, wherein the user enters text describing the image they want to produce. The system converts the text into vectors and, using the language model GPT-3, attempts to understand what the user wants.
  • Encoding is the next step, where the textual prompts, turned into vectors, are used to create the image the user requested.
  • Decoding, or refining, comes next: the system refines the image over multiple cycles until it depicts the scene realistically, evaluating after each pass whether further changes are required.
  • Output is the final step, where the finished image is displayed on the user interface.
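The four stages above can be sketched as a toy pipeline. This is emphatically not DALL-E's real implementation; every function below is a simplified, invented stand-in that only shows how data flows from prompt to image:

```python
# Conceptual stubs for the four stages; NOT DALL-E's actual algorithms.
def preprocess(prompt):
    """Turn the text prompt into a toy 'vector' (real systems use learned encoders)."""
    return [float(ord(c)) for c in prompt.lower() if c.isalnum()]

def encode(vector):
    """Map the text vector onto a rough 'image' grid (stand-in for the real prior)."""
    size = 4
    return [[vector[(r * size + c) % len(vector)] for c in range(size)]
            for r in range(size)]

def decode(image, passes=3):
    """'Refine' the image over several passes (stand-in for iterative decoding)."""
    for _ in range(passes):
        image = [[round(v, 1) for v in row] for row in image]
    return image

def generate(prompt):
    """Run all four stages and return the final toy 'image'."""
    return decode(encode(preprocess(prompt)))

img = generate("a cat on a table")
print(len(img), len(img[0]))  # 4 4
```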

Being cutting-edge software in the AI system, it has numerous uses. Irrespective of its benefits, what makes DALL-E stand out? Let’s have a look at it.

  1. Text-to-Image Synthesis

DALL-E has a unique capability to transfer your textual description into graphical representations. It enables you to go out of the box and explore all the possible realities while visually representing how your ideas and thoughts integrate. 

  2. Inventive outputs

Unlike other image-generating tools and software, DALL-E works beyond the ordinary. It creates images from surreal and imaginative ideas, producing results once thought impossible, such as a three-pawed cat or a flying table. Nothing is impossible for this tool to create.

  3. Extensive dataset

The software has been trained on a vast amount of data and tested many times. This enabled it to learn the relationship between visual objects and the textual information provided, and to produce high-quality images.

  4. Meticulously portrayed images

DALL-E has the remarkable ability to generate high-quality images that appear authentic. Image position, colours, appearance, and orientation are optimized according to user input, allowing the image to be customized to the user's preferences.

  5. Diverse specialized applications

DALL-E has a wide range of applications, from design to product development. The visual possibilities it offers are endless, a few of which are mentioned below:

  • Education
  • Entertainment
  • Product design
  • Marketing
  • Art
Image generated using DALL-E

Perks of DALL-E

This tool has numerous advantages that make it accessible and easy to use. Some of the most prominent are explained below:

  1. Customization

You can customize the generated image according to your preference. Whatever you can imagine, whatever idea comes to mind, you simply type a few phrases into the text box and get remarkable results.

  2. Accessibility

DALL-E is an easy-to-use software requiring no specialized knowledge or computer language. Almost any individual with basic writing knowledge can easily use the software.

  3. Iteration

Users can make multiple iterations with existing images, edit them accordingly and add new features. Images can be iterated quickly and swiftly.

  4. Promptness

It is a quick image-generating tool: with just a few clicks, your vision appears before you within seconds.

Irrespective of the immense benefits DALL-E offers, there are still concerns about its use in the real world. Some foundational aspects of the software could be improved: although it can process vast amounts of data, there are still prompts it struggles to translate into images.

Secondly, there is the matter of text input. The prompt must be clear and well specified for DALL-E to produce the exact image intended; if it is poorly defined, the resulting image may be inaccurate. Another point of contention is the legitimacy of the system: it can generate images of any kind, even of scenarios that do not exist in the real world.

Whether the system is grounded enough in reality for users to trust its output is a further issue. Moreover, AI interprets the literal meaning of words; words and phrases with similar meanings can confuse the system, producing images contrary to your idea.

Undoubtedly, DALL-E is a revolutionary breakthrough in artificial intelligence, letting users go beyond their imagination on a journey of possibilities. However, its practical use, and its potential to disconnect users from the real world, remain considerable concerns.

“DALL-E takes us one step closer to realizing the dream of machines that can truly understand and create visual content.” – Fei-Fei Li



AI and the Future of Prosthetics

There are no barriers when it comes to the applications of Artificial Intelligence (AI) in the modern world. With the introduction of AI as a first-hand utility tool such as ChatGPT, the world has entered a new era, and people are now convinced that enhancing efficiency is the prime objective.

Indeed, we are witnessing the remarkable impact of AI in various fields, from automated text solutions to cybersecurity, transportation, manufacturing, retail, finance, education, and particularly the field of healthcare.

Healthcare and AI

Healthcare has always been extremely important, directly linked to human quality of life and welfare. AI has transformed modern diagnosis, treatment, and patient care through extensive applications in medical tests and procedures such as MRI, CT scans, X-rays, diagnostic algorithms, drug delivery, personalized medicine, and robot-assisted surgery.

One of the most promising and interesting applications of AI is in the field of prosthetics, providing smart and accessible solutions. Gone are the days when artificial limbs were static, uncomfortable, and detached from the body's sensory feedback.

What is Prosthetics?

Prosthetics are artificial devices designed to replace a missing or damaged body part such as a hand, foot, limb, or facial feature. According to the World Health Organization (WHO), prostheses are artificial devices that replace missing body parts, while orthoses are supportive braces and splints that help damaged ones.

A good prosthetic should deliver both function and aesthetic appeal, making the amputee feel independent, emotionally comfortable, and complete.

Traditional Prosthetics and Challenges

The first known prosthetic was a toe, worn by an Egyptian woman around 3,000 years ago so that she could wear an Egyptian sandal. Later, during the Dark Ages, prosthetic limbs appeared, though they were mere rigid components with no functional value, such as the wooden or metal hands and peg legs used by sea pirates.

Traditional prosthetics were neither aesthetically pleasing nor offered mobility or function. These early devices were typically constructed from heavy materials, resulting in bulky and uncomfortable designs. This not only affected the wearer's physical comfort but also took a toll on their emotional well-being. Individuals relying on traditional prosthetics often faced challenges in carrying out daily tasks independently, leading them to seek assistance despite using artificial devices. These challenges prompted the development of modern-day prosthetics.

Modern Prosthetics and the Role of AI

The evolution of prosthetics has been nothing short of extraordinary, traversing centuries of innovation and advancements. From rudimentary wooden peg legs to intricately designed robotic limbs, the field of prosthetics has undergone a remarkable transformation.

The French war surgeon Ambroise Paré developed the first functional prosthetics and switched to lighter materials, applying principles of human physiology to mimic natural body movements. His designs, with added improvements, are still in use.

The year 1993 marked the introduction of intelligent prosthetics, opening a new direction for smart, sense-controlled devices. Adaptive prosthetics emerged in 1998, working on microprocessor-, pneumatic-, and hydraulic-based mechanisms. In 2006, OSSUR (an Iceland-based company) developed the fully AI-controlled power knee. Later, the same company developed the first bionic leg, which connects mind and machine.

The first fully integrated artificial limb was developed in 2015 by Blatchford. It used a total of 7 sensors and 4 CPUs to connect with the body's sensations and control. This AI-based system gives a more natural result in routine tasks like sitting, walking, and standing, and gives the user independence.

Today, AI is a necessary feature of modern prosthetics.

By harnessing the power of AI, we can create prosthetics that are more functional, intuitive, and personalized than ever before.

How AI Makes Prosthetics Smart

Artificial Intelligence (AI) in modern prosthetics works on a principle similar to the human body's natural coordination system.

Just as the human sensory organs (eyes, nose, etc.) coordinate with effectors (hands, limbs, etc.) through the brain, robotic prosthetics work similarly. These devices use cameras or radiation as sensors and electrically connected motor devices as effectors, while a central unit running complex algorithms and formulas acts as the brain.

This central unit (the 'brain') is equipped to receive and interpret body sensations. It is the most important component, and it is where AI is applied through two basic mechanisms:

Figure: ‘Symbolic Learning’ and ‘Machine Learning’ as Major Mechanisms of AI Prosthetics

SL (Symbolic Learning):

It helps to process images, symbols and the environment through a camera lens (computer vision).

ML (Machine Learning):

It helps to process the data input through sensors, storing some of it as memory to adapt and adjust to the user’s needs over time. It uses classifier and prediction algorithms to recognize speech and language, known as a ‘statistical learning mechanism’.

In addition, ML achieves a sensory connection to the amputee's body through a ‘deep learning mechanism’, relying on CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks).
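As a purely hypothetical illustration of the 'classifier' idea mentioned above, the sketch below assigns simulated EMG feature vectors to intended gestures with a nearest-centroid rule. The gesture names and template values are invented; real prosthetic controllers use trained models on multi-channel signals.

```python
import math

# Invented per-gesture "template" feature vectors: (mean amplitude, variance).
TEMPLATES = {
    "rest":  (0.1, 0.05),
    "grip":  (0.8, 0.30),
    "point": (0.5, 0.15),
}

def classify(features):
    """Return the gesture whose template is closest to the feature vector."""
    fx, fy = features
    return min(TEMPLATES,
               key=lambda g: math.hypot(TEMPLATES[g][0] - fx,
                                        TEMPLATES[g][1] - fy))

print(classify((0.75, 0.28)))  # grip
print(classify((0.12, 0.04)))  # rest
```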

Humans | AI Prosthetics
Vision, speaking and listening | Symbolic vision and statistical learning
Learning, recognition and memory | Artificial neural network and processor
Object and environment recognition | Machine learning through CNN
Table: Functional Resemblance between Human Sensory Response and AI Prosthetics

Exciting Breakthroughs and Success Stories of AI-Powered Prosthetics

Artificial Intelligence (AI) is used in modern myoelectric prosthetics, where electrodes pick up muscle impulses to generate and amplify signals that translate into controlled movements. In addition, the ‘Peripheral Nerve Interface’ (PNI) and ‘Brain-Machine Interface’ (BMI) are employed to interpret brain signals and connect smart prosthetics to human voluntary control. Some major breakthroughs in AI-driven prosthetics are:

AI-Based Myoelectric Hands and Bionic Limbs

Balanced voluntary control, jumping obstacles and climbing stairs are some of the most challenging activities for amputees who depend on prosthetics.

Luckily, myoelectric and bionic prosthetics, advanced interventions in the field, have solved this problem by enabling the user to perform voluntary actions comfortably. They make use of ‘electroencephalography’ (EEG) and ‘electromyography’ (EMG) signals, picked up through implanted electrodes, to take nervous-system messages directly from the user. Machine learning and memory help make repeated movements smoother and more seamless.

Johnny Matheny’s Success with Myoelectric Prosthetic Limb

Johnny Matheny had his arm amputated due to cancer in 2008. He did not give up but collaborated with the APL research team to have the best prosthetic designed. This journey led to the development of the advanced Modular Prosthetic Limb (MPL).

Johnny Matheny with his prosthetic arm. Credit: Inspiremore

This fully self-controlled limb takes signals from the brain through neural networks and device electrodes. Interestingly, this advanced AI-based prosthetic enabled Johnny to play ‘Amazing Grace’ on a piano.

AI and 3D Printing

The amazing technology of 3D printing, combined with AI, holds great promise for user-centred manufacturing of prosthetics. Wearable parts can be made with greater precision, in less time and at lower cost, while AI helps match the device to the individual wearer's body shape, weight, and exact size.

In addition to enhancing user comfort, 3D printing promises to improve the quality of life worldwide, especially in the poorest areas where affordability is one big challenge.

3D Printed Prosthetic Paw and the Success Story of Millie, a Greyhound

Millie, a greyhound puppy, was adopted by an Australian couple. Sadly, she was missing a front paw, which greatly affected her daily life and emotional well-being. The couple's determination to give Millie a comfortable and fulfilling life led them to an AI-based, 3D-printed paw as the last and best resort.

Millie with its prosthetic paw. Credits: API

The Inspiring Story of Adrianne Haslet, a Professional Dancer

A ballroom dancer from Boston, Adrianne lost her lower leg in the Boston Marathon bombing and later badly injured her arm in a car accident. Despite the emotional toll these incidents brought, she remained determined and hopeful. Thanks to AI, Haslet has been able to return to physical fitness and running after years, using a prosthetic leg that adapts to her body's needs and movements.

Artificial Skin, the E-dermis

Another intelligent feature added to modern prosthetics is an outer skin-like layer that adds a sense of real touch. In addition to voluntary control, modern AI prosthetics provide sensations of touch, pressure, and pain, just as natural skin receptors do. This artificial skin, called an ‘e-dermis’, is made of rubber and fabric and connects through a Peripheral Nerve Interface (PNI) to generate sensations of touch, pain, and temperature.

e-dermis, made of rubber and fabric material

AI-Based Exoskeletons

Exoskeletons are outerwear prosthetics like external coverings or suits. When equipped with AI modulators and networks, these work like a charm. Bionic limbs also use a similar approach where AI makes it possible to keep user intent and control integrated through robotic processors.

Angel Giuffria and her Bionic Limb

An actress and model, Angel Giuffria was born without a left hand. Trying outdated, minimally functional, bulky prosthetics in her childhood made her look for better options. Today, she uses an advanced bionic left limb with a myoelectric hand, enabling her to enjoy her favourite activities: biking, archery, yoga, and workouts.

Angel uses an advanced Bionic left limb with a myoelectric hand. Credit: Lajos Kalmár

Current Challenges and The Future of AI-Driven Prosthetics

Pioneering the future of prosthetics and healthcare, AI is no doubt improving quality of life for everyone. However, any technology can cause trouble if mishandled, and AI is no exception. The most common challenges with this technology are:

  • Keeping a delicate balance between fully automated AI systems and the human touch, leaving room to correct malfunctions. The lack of human oversight is one sensitive problem of depending on computerized and digitalized gadgets: if the automated system malfunctions and there is no option for a human to intervene and correct the error in time, the user is put at risk.
  • Information theft or biohacking and ethical concerns of data sharing and data leakage can make the devices malfunction or freeze and invade users’ privacy. This may be done intentionally by competitor companies or unintentionally by system errors or weak security locks.
  • Accessibility of technology. The 3D printing of customizable prosthetics seems a great opportunity for the world, especially in poverty-stricken areas, but this technology is not yet freely accessible.
  • Having sufficiently trained personnel is another challenge. Handling the latest smart-prosthetics technology is not a simple task; staff are needed who understand the technical aspects, troubleshooting, and other sensitive areas of the devices they deal with.
  • Overpricing and limited availability of AI prosthetic components, especially the chips that serve as the main processing unit.
  • Aesthetically pleasing design, to maintain the user's self-confidence, emotional well-being, and independence in society. This can be achieved by making devices look more natural, with texture and colour similar to the body part they replace.
  • Lack of sufficient coordination among engineers, technicians and researchers results in prosthetics lacking in one or more areas.
  • Improving the efficiency of AI prosthetics depends greatly on collecting data from diverse populations who can test the products and give feedback. This helps account for individual variability and supports better adaptation in upcoming devices. However, there is still a gap in collecting sufficient high-quality, diverse data.

Resolving these challenges may be our primary target, stepping ahead into a brighter and better future for AI prosthetics.

AI Prosthetics – The Journey Ahead

The past decade has marked a remarkable journey in the field of AI-based healthcare products, especially the innovation of smart prosthetics. These products have made considerable improvements in human quality of life, both physically and emotionally. Combining the principles of ML, SL, EEG and 3D printing, we have achieved AI-integrated prosthetics which deliver more comfort, sensory value and enhanced functionality.

Current research in the area of smart prosthetics focuses on integrating AR (augmented reality) and improved neural coordination networks, making prosthetics work and look just as natural as the actual body part. In addition, advancements in 3D printing technology are being made to make it the ultimate cost-effective and user-specific solution for the world. This can solve the problem of overpricing and accessibility.

Another area to work on is distant/remote monitoring and maintenance of AI prosthetics. This may help detect any error or abnormal behaviour in time through integrated sensors, which can quickly channel this information to the right department. This may also help avoid any risk of biohacking.

The invention of electronic skin ‘e-dermis’ has helped in adding a natural touch to artificial prosthetics. However, the present prototype, made of rubber sheets and electrodes, looks different from the natural skin appearance. Better collaboration among biologists, physiologists, engineers and concerned departments may help design optimized solutions which are also aesthetically pleasing and look more natural. The selection of the right quality materials for prosthetic manufacturing plays a vital role in this matter.

Better sharing of information is being made possible through integration with the Internet of Things (IoT), which helps make data exchange and remote control more coordinated and seamless.

While the demand for advanced prosthetic solutions continues to grow, entrepreneurs and innovators have a unique opportunity to capitalize on the potential of AI in this field.

A Concluding Note

As we journey forward, let’s stay dedicated to advancing prosthetics through continuous development, innovation, and research. Together, we can empower individuals with limb loss, helping them regain independence and improve their overall well-being.

In the future, AI-powered prosthetics will empower individuals with limb loss to redefine possibilities and embrace a life of newfound independence.

References:

  1. Hassan, M. (2023, January 23). How AI is helping power next-generation prosthetic limbs. Wevolver. https://www.wevolver.com/article/how-ai-is-helping-power-next-generation-prosthetic-limbs
  2. World Health Organization. (2017). WHO standards for prosthetics and orthotics.
  3. Nayak, S., & Das, R. K. (2020). Application of artificial intelligence (AI) in prosthetic and orthotic rehabilitation. In Service Robotics. IntechOpen.
  4. Safonova, O. (2020, December 15). Bioprosthetics: Can They Really Be Controlled With Our Minds? Wevolver. https://www.wevolver.com/article/bioprosthetics-can-they-really-be-controlled-with-our-minds-
  5. Ghazaei, G., Alameer, A., Degenaar, P., Morgan, G., & Nazarpour, K. (2017). Deep learning-based artificial vision for grasp classification in myoelectric hands. Journal of neural engineering, 14(3), 036025.
  6. [John and Marcia Price College of Engineering]. (2019, October 29). Utah Bionic Leg [Video]. YouTube. https://www.youtube.com/watch?v=GHTbK3zJ6OY&ab_channel=UtahCOE
  7. Smith, M. (2021, October 6). Breakthroughs in Prosthetic Technology Promise Better Living Through Design. Redshift. https://redshift.autodesk.com/articles/prosthetic-technology
  8. Kim, M. S., Kim, J. J., Kang, K. H., Lee, J. H., & In, Y. (2023). Detection of Prosthetic Loosening in Hip and Knee Arthroplasty Using Machine Learning: A Systematic Review and Meta-Analysis. Medicina, 59(4), 782.
  9. Kulkarni, P. G., Paudel, N., Magar, S., Santilli, M. F., Kashyap, S., Baranwal, A. K., … & Singh, A. V. (2023). Overcoming Challenges and Innovations in Orthopedic Prosthesis Design: An Interdisciplinary Perspective. Biomedical Materials & Devices, 1-12.
  10. Patel, R. (2021, February 10). A Glimpse Into the Future of Prosthetics: Advanced Sensors, E-Skin, and AI. ALL ABOUT CIRCUITS. https://www.allaboutcircuits.com/news/glimpse-future-prosthetics-advanced-sensors-e-skin-ai/

Note: This article is written with the assistance of Dr Muhammad Mustafa, who is an Assistant Professor at Forman Christian College University (FCCU), Lahore. His main interest in research is Cancer metastasis and the impact of psychological factors on cancer progression. He is known for his work as a faculty trainer and science communicator.


AI and Neurobiology: Understanding the Brain through Computational Models

In the realm of scientific exploration, the combination of artificial intelligence and neurobiology has opened new ways of understanding the human brain and has revolutionized healthcare practices. AI's computational power and vast capacity to analyze data make it a fascinating tool for unravelling the complexities of neurobiology. This article explores how combining the two can lead to significant advances in neuroscience, diagnostics, and treatment.

Artificial intelligence (AI) is an exciting blend of science and technology, with an impressive ability to simulate human intelligence and drive revolutionary advances across various domains. Neurobiology is the scientific study of the structures and functions of the brain involved in processing information, making decisions, and interacting with the surrounding environment. AI is directing the world of scientific research toward ever finer detail using algorithms.

AI is driving unprecedented advancement across many domains. In turn, new findings in neuroscience have influenced AI, as scientists have sought to understand and replicate the complex mechanisms of the brain.

Photo: University of Oxford

Background

The central concept that has shaped the development of artificial intelligence is the artificial neural network (ANN), which simulates the interconnected nature of neurons in the brain. In the 1950s, Frank Rosenblatt introduced the perceptron, an early form of ANN inspired by the structure and learning principles of the brain. It paved the way for more sophisticated models like the multi-layer perceptron (MLP), which consists of interconnected layers that process information and recognize patterns.
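
As a concrete illustration of Rosenblatt’s idea, the sketch below implements a single perceptron in plain Python: a weighted sum of inputs passed through a step activation, trained with the classic error-driven update rule. The AND task, learning rate and epoch count are illustrative choices, not details from this article.

```python
# A minimal perceptron in the spirit of Rosenblatt's 1958 model:
# weighted sum of inputs -> step activation, with the classic
# error-driven weight update. All numbers here are illustrative.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = y - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1   # nudge weights towards the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable task a single perceptron can solve.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
preds = [step(w[0] * a + w[1] * c + b) for a, c in X]
print(preds)  # perceptron reproduces the AND truth table
```

A single perceptron can only separate linearly separable classes, which is exactly the limitation that motivated multi-layer perceptrons.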

Another area of inspiration is the brain’s memory. Working memory, a crucial cognitive function in the human brain, has also influenced AI design: recurrent neural networks (RNNs) were developed to capture the temporal nature of data by carrying information from past inputs forward to inform future predictions.
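
The memory-like behaviour that inspired RNNs can be shown with a toy recurrent step: the hidden state is fed back in at each step, so a trace of earlier inputs persists after they are gone. The weights below are arbitrary illustrative values, not part of any real model.

```python
# A toy recurrent step: the hidden state h carries information from
# earlier inputs forward in time -- the property the article links to
# working memory. Fixed, made-up weights; no training involved.
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8, b=0.0):
    # tanh squashes the combination of current input and previous state
    return math.tanh(w_x * x + w_h * h + b)

sequence = [1.0, 0.0, 0.0, 0.0]
h = 0.0
states = []
for x in sequence:
    h = rnn_step(x, h)
    states.append(h)

# Even after the input returns to zero, the hidden state keeps a
# decaying trace of the initial 1.0 -- a simple "memory".
print(states)
```

The point is the qualitative behaviour: the state decays but persists, which is what lets an RNN use past inputs when predicting future outputs.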

AI-Brain Odyssey

The combination of AI and the brain takes us on an exciting journey through the fascinating world of artificial comprehension. AI-driven neuroscience provides insight into the brain’s workings and can benefit us in various ways. Below, we dive into how neuroscience research has influenced the development of AI algorithms.

This research led to perception models that replicate human sensory processing, memory, and recall mechanisms for efficient information processing. It also enables machines to process information, learn and make predictions in ways that closely resemble human cognition, narrowing the boundary between machine and human intelligence.

“The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking

Bridging the Gap between Machines and the Human Brain

The integration of AI in neurobiology helps scientists use machine learning techniques to analyze complex brain imaging data, decipher neural patterns, and understand the mechanisms of cognition, perception, and behavior. AI algorithms can sift through vast datasets, identifying patterns and correlations that may elude human observation.

Brain-machine interfaces establish a direct communication pathway between the brain and external devices. Algorithms interpret neural signals and translate them into actions, which can even help individuals control prosthetic limbs or interact with computers using their thoughts.
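
As a heavily simplified, hypothetical sketch of that decoding idea: fabricated per-channel “firing rates” are matched against per-action templates, and the nearest template wins. Real brain-machine interfaces use trained decoders on multi-channel recordings; every number and name below is made up for illustration.

```python
# Hypothetical nearest-template decoder: a fabricated "neural signal"
# (mean firing rate per channel) is translated into a discrete action
# by minimum squared distance to simple per-action templates.

ACTION_TEMPLATES = {
    "move_left":  [8.0, 2.0, 1.0],   # invented firing-rate profiles
    "move_right": [1.0, 2.0, 9.0],
    "rest":       [2.0, 2.0, 2.0],
}

def decode(firing_rates):
    # pick the action whose template is closest to the observed rates
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(firing_rates, template))
    return min(ACTION_TEMPLATES, key=lambda name: dist(ACTION_TEMPLATES[name]))

print(decode([7.5, 1.8, 1.2]))  # closest to the "move_left" template
```

Nearest-template matching stands in here for the trained classifiers real systems use; the structure (signal in, discrete action out) is the part that matters.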

Just as Tony Stark’s suits amplify his abilities, AI amplifies our understanding of neurobiology, empowering us to delve deeper into the complexities of the mind.

AI is pivotal in advancing our understanding of neuroscience by providing powerful tools and techniques to simulate brain processes. Here are some key ways in which AI is used in neuroscience:

Brain Imaging Analysis: AI algorithms are used to analyze data from brain imaging techniques such as magnetic resonance imaging (MRI) and electroencephalography (EEG) to identify brain regions involved in specific tasks or conditions. They are also employed to process large-scale neural data, decode brain activity and unravel the mysteries of brain function, especially in conditions like Alzheimer’s, Parkinson’s, depression, and other mental disorders.

Neuroimaging Data Processing: AI methods enable the processing and analysis of large-scale neuroimaging datasets. They can automate tasks such as image segmentation, registration, and feature extraction, allowing researchers to extract valuable information from vast brain imaging data efficiently.

Cognitive Modeling and Simulation: AI techniques, such as artificial neural networks, build computational models that simulate specific cognitive processes, such as learning, memory, and decision-making. These models help researchers gain insights into the underlying mechanisms of brain function and test hypotheses about brain activity.

Data Integration and Fusion: AI algorithms enable the integration of diverse data sources, including genomics, proteomics, and neuroimaging data, providing a more comprehensive view of brain function. By combining data from multiple modalities, researchers can gain a deeper understanding of the complex interactions within the brain.

Disease Diagnosis and Treatment: AI is employed to aid in diagnosing and treating neurological disorders. Machine learning algorithms can analyze patient data, including clinical symptoms, neuroimaging, and genetic information, to assist in accurate diagnosis, personalized treatment planning, and prognosis prediction.

Natural Language Processing (NLP) in Neuroscience: NLP techniques are utilized to extract and analyze information from vast amounts of scientific literature, enabling researchers to identify relevant studies, extract key findings, and discover new connections in neuroscience.

Ethical Considerations: The use of AI in neuroscience must be guided by ethical considerations that protect the well-being and autonomy of individuals. As the field contemplates future possibilities such as brain augmentation and mind uploading, clear ethical boundaries should be established in advance.

Conclusion 

In conclusion, by leveraging the capabilities of AI, researchers can analyze and interpret complex neuroscience data more efficiently and accurately. This collaboration opens new avenues for understanding the intricacies of the brain, helping to uncover novel insights and accelerate advances in neuroscience research and clinical applications. The integration of AI and neurobiology holds great promise for unravelling the mysteries of the brain and improving the lives of individuals affected by neurological conditions.

References:

Malik, N., & Solanki, A. (2021). Simulation of the Human Brain: Artificial Intelligence-Based Learning. In Impact of AI Technologies on Teaching, Learning, and Research in Higher Education (pp. 150-160). IGI Global.

Rana, A., Rawat, A. S., Bijalwan, A., & Bahuguna, H. (2018, August). Application of multi-layer (perceptron) artificial neural network in the diagnosis system: a systematic review. In 2018 International Conference on Research in Intelligence and Computing in Engineering (RICE) (pp. 1-6). IEEE.

Monsour, R., Dutta, M., Mohamed, A. Z., Borkowski, A., & Viswanathan, N. A. (2022). Neuroimaging in the Era of Artificial Intelligence: Current Applications. Federal practitioner: for the health care professionals of the VA, DoD, and PHS, 39(Suppl 1), S14–S20. https://doi.org/10.12788/fp.0231

Surianarayanan, C., Lawrence, J. J., Chelliah, P. R., Prakash, E., & Hewage, C. (2023). Convergence of Artificial Intelligence and Neuroscience towards the Diagnosis of Neurological Disorders—A Scoping Review. Sensors, 23(6), 3062. MDPI AG. Retrieved from http://dx.doi.org/10.3390/s23063062

Also Read: Brain Net Technology – An Attractive Digital Medium of Communication

Environment Conservation Journalism Award Nepal goes to Scientia’s contributor Gobinda

KATHMANDU: Gobinda Prasad Pokharel, one of the emerging science journalists from Nepal and an active contributor to Scientia Pakistan, has been awarded the ‘Environment Conservation Journalism Award’ by the Department of Environment of Nepal.

The award includes a cash prize of Rs 50,000, which was handed over to him by Minister for Forest and Environment Birendra Prasad Mahato at a function organised at the department premises on Sunday to mark World Environment Day 2023.

World Environment Day 2023 was a reminder that people’s actions on plastic pollution matter. The steps governments and businesses are taking to tackle plastic pollution are the consequence of this action. It is time to accelerate this action and transition to a circular economy.

Pokharel has been covering various issues related to science and technology, wildlife, climate change, biodiversity and science policy in Nepal for over a decade. Photo: Gobinda

Pokharel has been covering various issues related to science and technology, wildlife, climate change, biodiversity and science policy in Nepal for over a decade. He has also contributed his writings to several national and international media outlets such as The Third Pole Online, The Xylom, Scientia Mag and others.

Pokharel also received the Environment Journalism Award from the Nepal Forum of Environmental Journalists (NEFEJ) in 2022 and the Science Journalism Award from the Ministry of Education, Science and Technology of Nepal in the same year.

Pokharel is a science and environment reporter for Kantipur Daily and secretary of the Nepal Forum of Science Journalists.

Also Read: Will Nepal put together its flora details in the next seven years?

Countering climate change with the condemned cow

Thanks to the ardent and, dare I say, aggressive advocacy of environmentalists over the past few decades, the notion that farming, and more specifically livestock farming, plays a huge role in warming the planet is more or less internalized by the public.

The mainstream opinion holds that livestock and cattle are responsible for producing methane gas, in the form of flatulence, that enters the atmosphere as a heat-trapping agent; the figure stands above 16% of all greenhouse gas emissions. In case you’re unsure about the gravity of that statistic: it’s pretty serious.

For the most part, this idea is in complete agreement with the scientific literature and is endorsed by many pre-eminent figures in the environmental science community. However, it is often quoted out of context, with no regard for the original apparatus of the study, inevitably leading the public to wrongfully and wholly demonize a very important part of our social and biological ecosystem: cattle.

As member species, cattle are so crucial to the sensitive ecosystem around us that their removal from the ecological scene has created problems of a scale so grand that they now threaten the very survival of humankind.

With increasing populations, declining soil health, dwindling wild and marine life, surging rates of infection outbreaks and aggravating geopolitical situations across the globe, the scientific community has spent the better part of the last decade looking for the Holy Grail of ecology i.e., finding harmony between nature and man in such a way that allows for mutual growth.

Multiple attempts have been made in recent years to delay the sentence of a perpetually altered planet. While some of them have brought transient hope, none have been more promising than this method. The key difference to be noted here is that while the rest of the methods have a very artificial, man-made touch to them, this approach is based 100% on mimicking nature. And this is what sets it apart from the rest: absolutely no chances of unwanted and unknown consequences, since this is what evolution had intended in the first place.

Meet the cast members

It would be absurd to think that cows, sheep or goats alone can bring about this change. The second player in this process is the grasslands: think of the American Prairie, the Argentinian Patagonia, or the vast Mongolian Steppe.

Despite being separated by thousands of miles of terrain and ocean, all of them have something in common: they all have perennial grasses. They are all built to hold millions of grazing herbivores and are infinitely more valuable to us as carbon-sequestering warehouses than any forest on Earth. Allow me to explain.

Why Grasslands?

It is common knowledge that increasing vegetation in a small locale has a considerable positive effect on the local weather. It increases the rate of transpiration in the land, thus bringing down temperatures and creating greater odds of rainfall. This process is known as changing the microclimate.

But when the microclimates of hundreds of adjacent locales are changed, the change appears in the macroclimate. Done on a large enough scale, this method can reliably modify the planetary climate in a suitable time span.


To change the microclimate of an area, vegetation must be introduced there. Towards this end, we have two means, forests and grasslands. The obvious choice would be and has been forests. However, forests are slow to grow and require a lot of resources to ensure survival in their early years, including a generous supply of water, which can become especially problematic in areas that already face water shortages. Forests also sequester carbon from the air into biomass at a relatively slow pace when compared to grasslands.

Forests are planted with considerable spaces left between adjacent trees, leaving the ground surface bare and allowing water to run off and carbon to escape the soil. Further, forests expand with excruciating tardiness in nature, and given the rate at which climate change outstrips any green growth, forests are a losing gamble.

Another approach is to plant trees by hand over large swathes, which is neither time-efficient nor economical and also puts a significant burden on the water supplies of the area, making it impractical where it is needed most. This is precisely why most of the third world is turning dry and dusty.

Enter grass! To count off some of its advantages over forests: grasses can be annual or perennial, and they grow very quickly into their mature form. I’m talking about a mere couple of weeks to reach raging adulthood. Grass either scatters its seeds (quite efficiently) at the end of every season or simply grows back from any roots left from the last season, eliminating the need for replanting ever again. It is very aggressive in spreading to its surroundings and will happily annex any land available, even from other, weaker strains of grass.

This takes care of the expansion problem that plagues forests. Contrary to trees, grasses are tough right from birth and do not need to be pampered with a generous water supply from canals or rivers. They make do very well with the season’s rain.

And when rain does come around, the blades of the grass covering the ground surface prevent water run-off and increase the underground water reservoir by absorbing the rainwater. In this way, grass quite literally turns the tables on the forests by increasing the water available at the end of each season.

The cherry on top is that while forests can sequester up to 27 tons of carbon dioxide equivalents per acre in the form of biomass, grasslands sequester 49 tons of carbon dioxide equivalents per acre.

Camera, Lights, Action!

So, here’s how it’s supposed to work. Grass sprouts just in time for the rainy season. It gets a week to bask in the glory of its youth before a vast, roving herd of cattle (cows, buffaloes, sheep or even goats) arrive at its doorstep, huffing and puffing and looking very hungry.

They begin making short work of the grass, but they do not linger long enough to uproot it from the ground since they are exposed to potential predators. In a more modern setting, the role of the predator can be filled by a particularly bad-tempered shepherd or an enthusiastic border collie. The herd keeps moving, and the lower parts of the grass live to see another day.

Grazing off the top of the grass allows it to grow back again in a month or so instead of just turning dry and yellow. But the herd does not depart without leaving a considerable tip for the grass: an average cow drops between 50 and 100 lb of manure daily.

And that’s just one cow. Imagine the aftermath of an entire herd. Needless to say, despite the unattractive image it conjures up, this natural fertilizer enriches the land with all the necessary minerals and microflora and sends the local vegetation into a hyperdrive.


As the first pillar of the food chain is strengthened, the wildlife returns to the region and the biomass output of the land multiplies several-fold. And as a parting gesture, the herd tramples any drying grass, which reduces the dead and dying grass and levels the blades onto the ground, creating a perfect platform for rainwater absorption. The herd moves on to a fresher patch of grass and does not return to the original until it is fully grown again.

Trampling the drying grass produces further positive results, as it reduces the need for fire to clear a plot for next year’s growth. One estimate puts the pollution released by burning a single hectare of grassland as equivalent to the fumes from 6,000 cars.

To put it in perspective, every year, more than 1 billion hectares of land are burned in Africa alone. Replacing fire with cattle would not only take care of the old growth but also retain the organic matter in the top layer of the soil, saving the land from mineral and nutrient deficiency.

Tried and tested

This method is not very new. In its modern understanding, it has existed for more than three decades and has been implemented by its pioneer Allan Savory in over 21 million hectares of degrading land with incredible success.

But in a world where one-third of the land, or 400 million hectares, is desertifying simply because the herbivore and the grassland have been divorced, 21 million is just over 5%. This goes to say that while 21 million may be a good start, it is certainly not a celebratory milestone. And getting here was not without its tragedies.

Allan Savory recalls his young self with his orthodox ideas about wildlife and its place in an industrializing world; he recalls the time he suggested to an African government that the reason the land was deteriorating was, simply put, “too many elephants”.

Seeing no resistance from the scientific community, the government had over 40,000 elephants shot and killed in the next few years. Savory would go on to call this “the saddest and greatest blunder of [his] life”.

Removing the indigenous animals from their habitats had only worsened the problem instead of solving it. Since then, Savory has dedicated his life to finding a better, more humane, more economical and more “natural” way around this problem. This method of rotational grazing, or “Holistic Planned Grazing” as Savory calls it, is his magnum opus, and the results it has achieved are nothing short of miraculous.

Waiting for the stars to align?

The question you may ask is: if it works so well, why is it not everywhere already? This is the same question Allan Savory and those who have followed suit want to ask of everyone else. The YouTube video of Savory explaining his science has garnered over 5.7 million views, yet the international stage has offered little response.

Part of the problem lies in the increasingly fragmented communities within the landscape. For Holistic Planned Grazing to work, it really needs to mimic nature; a vast area of grassland and herds of up to tens of thousands of animals are optimal.

This is where private ownership and a general refusal to cooperate, owing to personal interests, create hindrances to achieving the desired outcome. Communities such as the Maasai in Kenya, where people have overcome their selfish interests for the greater good of the region, have been rewarded not only with greener pastures but also healthier livestock, a higher water table, fewer droughts and more wildlife.

This practice undoubtedly pays great dividends far into the future, but the question here is of actually coming together to implement it. The onus is no longer on science, which has delivered its research, but on the international community to make sure the wisdom is followed through. The cow has exonerated herself; it is now our turn to meet her halfway.


Also Read: ROLE OF GENETICS IN INFECTIOUS DISEASE SPREAD

Beyond the Brick and Mortar: How Ancient Homes Were Cooler Than Modern Ones

As this year’s summer rolls in, it brings with it fears of hot and humid hours with no escape from the angry sun and no air conditioning, as power is cut intermittently across the country, sometimes for several consecutive hours. If you’ve ever sat through one of those episodes where the electric supply is cut off at midday with no backup available, you are likely well aware of the ensuing mania.

And anybody made familiar with this situation begins to understand the fundamental flaws in our modern architecture, which cause the buildings to be very energy-inefficient and subject to fluctuating temperatures. You find yourself horrified at the sheer potency of the hot weather and begin to sympathise with the people with no access to electricity.

You may even have included your ancient forefathers in the list of people who had it tough regarding the weather. But here is something you might not know about the ancient people of Indo-Pakistan: they were actually very well accommodated within their locales, no matter the climate.

It is a surprise to many of us that hundreds and thousands of years ago, people living in the same geographical locations as ours were living just as comfortably, if not more so, under the same weather. Whereas our modern houses and buildings start to feel like a pan over the stove as soon as the calendar hits May, ancient buildings retain constant cool temperatures throughout the season.

And all this with no air conditioning or electricity consumption. The ancients had slowly learned and built up their art of indigenous architecture over several centuries, allowing them to build the perfect passively-cooled homes without ever needing electricity. 

In fact, their techniques were so excellently adapted to their locations that scientists have been looking to implement them in urban construction to reduce energy wastage and greenhouse gas emissions. According to the International Energy Agency, buildings worldwide consume almost 34% of the energy produced annually (almost 153 quintillion joules)1. In light of this incredible statistic, here are a few ways in which modern architecture could benefit from the distilled wisdom of the ancients.

Building Materials

It would not be an overstatement to say that building materials are perhaps the most crucial factor in determining the eventual “thermal comfortability” of a building. The material used in roofs, ceilings, walls, and floors should be weighed not only against the budget but also for its eventual insulating and cooling abilities.

Modern buildings extensively use cement and steel for the bulk of the structure. While widely available and structurally durable, they are a recipe for a thermally inefficient outcome. Studies have shown that these materials are not only practically ineffective2 at keeping out heat when compared to traditional materials but also contribute a weighty 13.5% of global CO2 emissions annually (almost 4.9 billion tons)3.

Compared to this, traditional materials sourced locally – thus saving the need for extensive transportation and processing – not only have a smaller carbon footprint but have also been proven to have specific cooling properties. For instance, a study published in 2019 found that using limestone in buildings reduced summer heat gain and winter heat loss due to its innate physical properties4. Similarly, using wood, straw, clay, and other naturally occurring materials also helps decrease both cost and indoor temperature.


An ab anbar (water reservoir) with windcatchers (openings near the top of the towers) in the central desert city of Yazd, Iran

Ventilation

It is common knowledge that a well-ventilated room will fare much better than a closed-off space when it comes to fighting off radiating heat. However, natural ventilation as an alternative to electrically powered air-conditioning is a rarely sought option, mainly because of the lack of expertise in utilising the natural flow of air. Despite the well-known principle of convection (hot air rises to the top, while cold air sinks to the bottom), very few modern buildings use this dynamic to reduce the cost of air-conditioning and its associated impact on the environment.

While the ancients were arguably unaware of the precise reasons for the convection phenomenon, they designed their architecture around it. This would include the placement and size of windows, the installation of wind-catchers in ceilings, and the placement of smaller openings along the lower half of the walls. In Shahjahanabad, India, windows of small diameter were placed strategically on the walls at ceiling and floor level, allowing the cold draft to enter the room through the ground-level windows and the hot air to exit from the top5.

Similarly, 300 years ago, the people of Delhi were placing paper or straw screens soaked in water in the doors and windows of their homes. This caused any and all air passing through the screens to cool down by evaporation6.

Vegetation

While vegetation and leafage certainly bring a sense of peace and beauty to any architecture, aesthetic value is not the only benefit they render. According to a 2014 study, vegetation cover on the ground, including grass, shrubs and other herbage, tends to reduce potential summer temperatures, while a “green roof” – the foliage cover provided by plants on the roof – serves to reduce the cooling load of the building by a considerable percentage7.

The ancients extensively used vegetation and greenery in their buildings and surroundings, even if not by choice. Their towns and cities boasted a much greater green cover than our cities do today. The shade provided by the trees helped block direct sunlight, which kept the indoor temperatures low. Combined with the transpiration and evapotranspiration that resulted from the verdure, the resulting temperatures were shockingly cool. 

Even today, closely packed urban areas are, on average, 1-3°C warmer during the day and almost 12°C warmer during the evening8 than rural settlements, which are broken up by vegetation that allows for ventilation and provides shade. Researchers have dubbed this phenomenon the “Heat Island Effect”.

Insulation

A prevalent passive method of reducing or even curbing heat exchange across a boundary is adding layers of non-conducting material. The best non-conductor of heat is air, so we can minimise heat exchange by using porous materials that hold large quantities of air in a small space. Using polystyrene, fibreglass, cellulose, or mineral wool as the insulating material can reduce heat gain into the house interior.
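
The effect of an insulating layer can be made concrete with Fourier’s law of conduction, Q = kAΔT/d. The sketch below compares a bare concrete wall with the same wall plus a polystyrene layer, using typical handbook conductivities; the wall dimensions and temperature difference are invented for the example.

```python
# Illustrative application of Fourier's law of heat conduction,
# Q = k * A * dT / d, comparing a bare concrete wall with one that
# has a polystyrene layer added. Conductivities are typical handbook
# values; wall size and temperatures are made up for the example.

def heat_flow(k, area, d_temp, thickness):
    """Steady-state conductive heat flow through one layer, in watts."""
    return k * area * d_temp / thickness

AREA = 10.0   # m^2 of wall
DT = 20.0     # degrees C between outdoors and indoors

# 20 cm concrete wall (k ~ 1.4 W/m.K)
q_concrete = heat_flow(1.4, AREA, DT, 0.20)

# Layers in series add thermal resistance R = d / (k * A);
# here, the same wall plus 5 cm of polystyrene (k ~ 0.035 W/m.K).
r_concrete = 0.20 / (1.4 * AREA)
r_foam = 0.05 / (0.035 * AREA)
q_insulated = DT / (r_concrete + r_foam)

print(f"bare wall: {q_concrete:.0f} W, insulated: {q_insulated:.0f} W")
```

Even a thin layer of a poor conductor dominates the total thermal resistance, which is why porous, air-trapping materials are so effective.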

Although the ancients did not have access to any of these modern insulating materials, they used natural fibre in their buildings, such as dried grass in thatched roofs and straw in the adhesive and plaster for their walls. Both grass and straw create natural insulation due to their hollow structure, bringing about lower temperature fluctuation in the interior of the house throughout the day.

Today, the herdsmen of the Mongolian steppe use wool as a tarp for their yurts and wool carpets to prevent heat loss from the ground. It works perfectly for them during the harsh winter, and with some science, it can work for us during the hot South-Asian summer.

In addition, ancient fortified buildings made use of thicker walls, which delay heat exchange and keep interiors cooler. There was also the practice of leaving the space between two layers of a wall hollow. Such architectural features can be found in buildings of the Mughal era, such as the Badshahi Mosque of Lahore, Pakistan.

In some “heat capitals” of the world, notably in the Northern and Western Subcontinent, the custom of having subterranean chambers was prevalent. This used the surrounding soil to act as insulation and provided a cooler escape during the hot days of summer9.


Exterior Additions

Arguably, exterior additions do not contribute as much to the total energy savings of a building; however, their effect cannot be discounted entirely. As a 2017 study showed, the mere presence of a body of water near or around a building reduces the atmospheric temperature. This could be in the form of a lake, a moat, or even a courtyard fountain10. The ancients who settled on the shores of seas and the banks of great rivers enjoyed the perks of the evaporative cooling caused by the water.

Moreover, the stone or wood latticework found around many medieval buildings helped reduce the surface area exposed to direct sunlight while allowing ventilation. Such “jaalis” are ubiquitous in Indo-Islamic architecture; their modern descendant, the brise-soleil, is now a part of many modern buildings.

Finale

The lifestyle of the generations before us is not very attractive to most of us, and while it is a good idea, in general, to move forward and live in the present with our own unique identity, it is crucial not to deny any and all credit to the people of the past. Indeed, some of their architecture has stood the test of time, and despite nature’s continuous wear and tear, their monuments stand tall. 

There is shrewdness in studying the knowledge of the past and applying it as a modern concept, for despite the inconceivable technology gap between them and us, there will always be something we can learn from them.

References:

  • 1 https://www.iea.org/reports/buildings
  • 2 Pandit, R. K., Gaur, M. K., Kushwah, A., & Singh, P. (2019). Comparing the thermal performance of ancient buildings and modern-style housing constructed from local and modern construction materials.
  • 3 https://www.imperial.ac.uk/news/235134/greening-cement-steel-ways-these-industries/
  • 4 Sharma, A. K. (2019). Evaluation of different building designs to enhance thermal comfort or comparative study of thermal comfort in traditional and modern buildings.
  • 5 Gupta, N. & Centre for Energy Studies, Indian Institute of Technology Delhi. (2017). Exploring passive cooling potentials in Indian vernacular architecture
  • 6 Dalrymple, William. (2006). “The Last Mughal: The Fall of a Dynasty, Delhi 1857”
  • 7 Perini, K., Magliocco, A., Effects of vegetation, urban density, building height, and atmospheric conditions on local temperatures and thermal comfort. Urban Forestry & Urban Greening (2014)
  • 8 https://www.smithsonianmag.com/science-nature/city-hotter-countryside-urban-heat-island-science-180951985/
  • 9 Dalrymple, William. (2006). “The Last Mughal: The Fall of a Dynasty, Delhi 1857”
  • 10 Subramanian, C. V., Ramachandran, N., & Senthamil Kumar, S. (2017). A Review of Passive Cooling Architectural Design Interventions for Thermal Comfort in Residential Buildings

Also Read: Climate Change and Residential Buildings – The way forward