Frank Herbert’s classic science fiction epic “Dune” has been adapted into a magnum opus by Canadian filmmaker Denis Villeneuve. The most critically acclaimed adaptation yet, the film renders Herbert’s meticulous world-building as an astounding cinematic spectacle.
The novel, a perennial best-seller, is widely considered “one of the greatest science fiction novels of all time.” Adapting it for the big screen is no small feat: Villeneuve manages to cover the 800-page saga of Paul Atreides on Arrakis in two hours and thirty-five minutes of run-time.
Vast dunes of Arrakis. Photo: Freepik www.freepik.com
The film is set in the far future, in the year 10,191. The story follows Paul Atreides, played by Timothée Chalamet, who is sent to the planet Arrakis with his parents, Lady Jessica and Duke Leto Atreides, played by Rebecca Ferguson and Oscar Isaac, to take over from the former rulers, the ruthless Harkonnens. Arrakis, informally known as Dune, is a fictional desert planet featured in Herbert’s novel and is integral to the film’s central plot for several reasons.
Though nearly uninhabitable and patrolled by massive sandworms, it is the sole source of “spice”: whoever controls Arrakis controls the spice, which in turn makes interstellar travel possible, and only the Fremen truly know the planet. As Paul’s fate as future ruler hangs in the balance, he navigates the impenetrable landscape of Arrakis with the help of the Fremen, letting the Dune saga unfold.
The casting is impeccable. Timothée Chalamet brilliantly embodies Paul, a teenage boy unsure of how to carry the position he inherits from his father, Duke Leto. Rebecca Ferguson effortlessly personifies Lady Jessica, Paul’s formidable mother, and her exposition of the Bene Gesserit from the novel is equally impressive and a personal favorite.
The film also does a commendable job of weaving intricate threads into the central plot, such as Paul’s prophesied role as Muad’Dib, the discreet operations of the Fremen, and the sinister plotting of the Harkonnens. Viewers are kept in the loop as events unfold one after another in the Dune universe, hinting at something larger to come. However, some events may leave a superficial impression on viewers who have not read the novel.
Nonetheless, Villeneuve compensates for this lack of context by showing Paul’s visions and dreams, in which he meets Chani, played by the enigmatic Zendaya. These meetings foreshadow Chani’s significance in Paul’s future journey among the Fremen.
Villeneuve’s direction departs from the hallucinatory vision Herbert had for his novel; he directs the movie in his signature style, familiar from prior films such as Blade Runner 2049 and Arrival. David Lynch’s 1984 adaptation, by contrast, leaned into the novel’s psychedelic ambiance.
With a distinct futuristic cinematography style, Villeneuve carefully orchestrates wide shots of sandstorms, spaceships, and Paul’s visions through the extraordinarily gifted cinematographer Greig Fraser. Both Fraser and Villeneuve draw on long monologues in the novel to visualize Arrakis, mostly from Paul’s point of view, throughout the film.
My favorite aspect is the visceral, near-monochromatic color palette, which goes hand in hand with Fraser’s shooting style. Fraser insisted on shooting scenes in natural light, using sand screens to create the proper reflection. In addition, Fraser and Villeneuve experimented with unusual camera angles for technical scenes, such as the ornithopter spinning around the worm.
The soul of Herbert’s book lies in the link it draws between ecosystem and spirituality, and the director-cinematographer duo masterfully engineer this through specific shots in the film. Fraser’s ability to immerse himself in various cultures and capture the relevant moments, coupled with Villeneuve’s almost documentary precision, gives this book-to-film adaptation a coming-of-age sensibility, and it is genius, if you ask me.
As for the costume design, the outfits for every character are meticulously tailored to their character arcs. They are crisp, antique, and look like they weigh a ton, at least!
Ornithopters in Arrakis. Photo: Freepik www.freepik.com
If you think production design and cinematography are the icing on the cake, wait till you are mesmerized by the stunning editing and soulful score. German film composer Hans Zimmer reunited with Villeneuve for Dune after Blade Runner 2049. Despite a prior commitment to Christopher Nolan’s Tenet, Zimmer picked Dune instead, owing to his childhood love for Herbert’s novel. Paired with BAFTA winner Joe Walker’s intuitive and timely editing, the film finds its rhythmic beat, certainly the cherry on top!
This adaptation stands out because of Herbert’s take on Bedouin culture, which is intricately woven into the characters’ lives in the Dune universe. As a Muslim viewer, reading the novel and watching the film in the theater kindles many emotions as you recognize themes of dynasty politics, pilgrimage, and prophecy drawn from Islamic tradition. It is both surreal and nostalgic. That a largely westernized film industry would take up such a narrative as a massive-budget project had me sold for the first day, first show!
Dune is a relatively slow-paced film and can be watched without reading the novel. The film’s marketing was overblown, but that may have been necessary to recoup the large-scale investment in bringing Herbert’s universe to life. Because most scenes are devoted to imagery and colossal visuals, individual character arcs other than Paul’s get relatively little room.
Still, for sci-fi zealots like myself, and for those who have long treasured Herbert’s Dune, Villeneuve has done justice to the adaptation we all dreamt of. All in all, Dune is a bang for the buck! It is both an aural and visual feast with an ensemble cast, leaving viewers craving more. Be sure to watch part two, where the dunes of Arrakis promise even greater battles and revelations!
Nandana is situated in District Jhelum, Pakistan, about 60 miles southeast of Islamabad in a straight line, and can be reached by road in under three hours. In the distant past, Nandana was a capital city of historic importance and remained an administrative district until the second half of the 18th century A.D. It stayed more or less inhabited up to the 18th century but was abandoned thereafter, its population shifting down to Baghan wala in the plain below.
The site now stands abandoned and has recently been protected by the Department of Archaeology. To this day, a visitor cannot miss the conspicuous sight of the high mount and its peak, which Beruni once climbed to take measurements for his experiment. These stand out clearly in the light of Beruni’s own observations.
For a long time, Beruni was anxious to access the sources of Hindu literature, astronomy, and other sciences. During the period of his service (399-406 A.H) with Abul Abbas Mamoon Khwarizmshah, he became better acquainted with the power and position of Sultan Mahmud and the importance of the Ghazna court as a gateway to India.
During his stay at Nandana, Beruni accomplished more than one task. Making the town a center for his inquiries, he extended his visits into the surrounding region, which was rich in minerals; Beruni was equally interested in precious stones.
In Nandana, Beruni observed the latitude of the place, which he noted in his Kitâb al-Hind alongside the latitudes of other places he had personally visited. The figure for Nandana (as given in the printed edition) is 32° 0′. Later, Beruni also calculated the longitude of Nandana (from the westernmost coastal point of the Maghrib, North Africa), revised the figure for its latitude, and recorded both in his al-Qanun al-Masudi. In the printed edition of that work, the longitude and latitude of ‘Fort Nandana’ are 94° 43′ (E) and 33° 10′ (N), respectively.
Diagram illustrating a method proposed and used by Al-Biruni to estimate the radius and circumference of the Earth. Credits: Wikimedia Commons
Nandana: The Scene and Setting for the Experiment
Beruni’s approach to Nandana was naturally from the northwest along the age-old route, traversing the elevated, roof-like Ara valley and then descending south-eastward (from the sector of the present Ara village and the Ara Rest House) towards the Nandana Pass. The landscape en route to Nandana, as it would have appeared to Beruni, is best visualized through the vivid description left by a modern researcher, Sir Aurel Stein, who followed in Beruni’s footsteps: “I may now proceed to give an account of the route leading down from the Salt Range through the Pass of Nandana, and the remains of the ancient stronghold.
From the elevated ground of the Ara Plateau, at the height of about 2,400 feet, a steep winding road leads down over the rocky scarp of the range for close to 2 miles to where a small dip, about 200 yards across, at an average level of 1,300 feet stretches between two small valleys drained by streamlets which further south unite below the ruined stronghold of Nandana.
Immediately above the dip referred to, which forms a natural fosse, rises the bold rocky ridge of Nandana very abruptly. On its top, at a height of about 1,500 feet above sea level, it bears conspicuous ruined structures, and along the precipitous northern slopes below these run the remains of a boldly built line of walls, defended by bastions.
This fortified ridge completely bars further descent on the route; for the two small valleys above-mentioned contract on either side of it into deep and extremely narrow gorges, and descend for some distance between almost vertical rock walls, hundreds of feet high.” As one approaches the site, the northern slope of the rocky ridge, on which stood the fortified inner city, becomes prominent.
Before reaching its base, one passes through the ruins of the outer quarters of the city. Proceeding further along the track higher up the slope, one comes upon the massive foundations of the fortification wall skirting this northern side and the remnants of the gateway leading into the walled city.
Though Beruni had a partial view of the plain through the Nandana Pass from his quarters, he could not view it fully unless he either crossed to the other side of the Pass or climbed the mountain. He preferred to go up to the mountain top, both to size up the plain and to find the peak point from which a vertical measurement to its foot could be taken.
Temple at Nandana Fort. Credits: Dawn
To do so, he must have come out of the fortified part of the city, passed through the lower town, traversed a long way toward the north-west, crossed the shallow rivulet flowing down into the gorge, climbed the slopes of the spur along its north-western shoulder, and reached the top before he could have a full view of the plain. When he did, and saw the vast level plain extending southward to the horizon, he took the final decision to try out his new method for determining the dimensions of the Earth.
The Problem and the Method for Its Solution
The method of finding the circumference and other dimensions of the Earth by trigonometrical calculation, from the dip of the horizon observed from a mountain peak, was a fresh contribution by Beruni. He applied it for the first time in his Nandana experiment during 411-414 A.H.
He ascertained the sight line extending from the mountain peak to where the earth and the blue sky met (the horizon). The line so visualized from his standing position on the peak dipped below the horizontally fixed line by an angle of 0° 34′. Then he measured the peak-to-bottom perpendicular height of the mountain and found it to be 652 zirāʿ (cubits) and a fraction, reckoned by the zirāʿ used as a cloth measure at that place (Nandana). Now angle T is a right angle, angle K is equal to the angle of the dip (0° 34′), and angle H is its complement, 89° 26′.
So if the angles of the triangle HTK are known, its sides are also known in proportion to TK and the sinus totus (the whole sine, taken as 60). By this proportion TK will be 59° 59′ 49″, and the excess of the sinus totus over it is 0° 0′ 11″. But that excess is the perpendicular height HL, which is known in zirāʿ, and the ratio of its (HL) zirāʿ to the zirāʿ of LK is the same as the ratio of 0° 0′ 11″ to 59° 59′ 49″.
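The proportion above reduces to a simple relation: from a peak of height h, the horizon dips below the horizontal by an angle d with cos d = R / (R + h), which can be solved for the Earth's radius R. A minimal sketch in Python, assuming the quoted dip of 0° 34′ and a round height of 652 cubits; the cubit-to-metre conversion is itself uncertain, and the 0.49 m figure below is purely an assumption:

```python
import math

def earth_radius_from_dip(dip_deg_min, height):
    """Biruni's relation: cos(d) = R / (R + h), where d is the dip of
    the horizon seen from a peak of height h above the plain.
    Solving for the radius gives R = h * cos(d) / (1 - cos(d))."""
    deg, minutes = dip_deg_min
    d = math.radians(deg + minutes / 60.0)
    return height * math.cos(d) / (1.0 - math.cos(d))

# Figures quoted in the passage: dip = 0° 34', height ≈ 652 cubits.
R_cubits = earth_radius_from_dip((0, 34), 652.0)

CUBIT_M = 0.49  # assumed length of the local cloth cubit, in metres
print(f"R ≈ {R_cubits:,.0f} cubits ≈ {R_cubits * CUBIT_M / 1000:,.0f} km")
```

With these assumed inputs the result lands within a few percent of the modern mean radius of about 6,371 km, which is why the Nandana experiment is so celebrated.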
Recent advancements, such as China’s Tianyan-504 system and Google’s Willow chip, demonstrate substantial progress in designing qubits—quantum units of information that exploit superposition and entanglement to process data at scales unimaginable with classical bits. Although these devices remain technically challenging and limited in scope, the rate of improvement suggests a future in which quantum hardware may break through longstanding cryptographic defenses.
This shift threatens conventional encryption methods used in financial transactions, government communications, and personal data protection and reverberates through Web3 ecosystems. Decentralized platforms, blockchain-based financial instruments, and tokenized digital assets rely heavily on cryptographic primitives that could become vulnerable to quantum attacks. Organizations and communities worldwide now face a pivotal choice: adapt to this new reality by adopting quantum-resistant solutions or risk exposing their digital infrastructures to unprecedented threats in the years ahead.
Quantum Hardware and Algorithms: Redefining Security Threats
At the heart of quantum computing’s promise lies the qubit, a fundamental building block that can exist in multiple states simultaneously. Engineers struggle to keep qubits coherent for extended periods, maintaining them at cryogenic temperatures and shielding them from the slightest interference.
Error-correction techniques must delicately monitor these states without collapsing them into classical outcomes. The Willow chip’s refined error management and Tianyan-504’s large qubit count point toward more stable systems. However, scaling from a few hundred to thousands or millions of reliable qubits remains a colossal challenge.
Despite these hurdles, the theoretical capabilities of quantum algorithms pose grave implications. Shor’s algorithm, for example, drastically reduces the difficulty of factoring large integers—a cornerstone of RSA-based encryption. Breaking RSA in a reasonable timeframe would upend traditional public-key systems that currently protect sensitive information.
Similarly, Grover’s algorithm accelerates brute-force searches, potentially weakening symmetric encryption methods by reducing the time required to guess keys. Although present-day quantum machines cannot yet implement these algorithms at the scale needed to shatter modern encryption, the trajectory is clear: once error-corrected, high-qubit-count systems emerge, cryptographic assumptions once seen as unbreakable may fail.
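The quantum speedup in Shor's algorithm comes entirely from the order-finding step; the rest of the reduction from factoring to order finding is classical number theory. A toy sketch, with a brute-force loop standing in for the quantum subroutine (so it only works for tiny N such as 15):

```python
import math
import random

def order(a, N):
    """Brute-force the multiplicative order r of a mod N: the smallest
    r > 0 with a**r == 1 (mod N). This is the step a quantum computer
    performs exponentially faster via the quantum Fourier transform."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N):
    """Classical skeleton of Shor's reduction: pick a random a and find
    the order r of a mod N. If r is even and a**(r/2) is not -1 mod N,
    then gcd(a**(r//2) - 1, N) yields a nontrivial factor of N."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g  # lucky pick: a already shares a factor with N
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            f = math.gcd(pow(a, r // 2, N) - 1, N)
            if 1 < f < N:
                return f

print(shor_classical(15))  # prints 3 or 5, a factor of 15
```

For RSA-sized moduli the `order` loop above is hopeless, which is exactly the point: replacing it with quantum order finding is what turns factoring from intractable to feasible.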
This looming threat extends beyond conventional cybersecurity models. Adversaries might already be capturing encrypted data, planning to decrypt it years later once quantum hardware matures. Long-lived secrets, whether state documents, corporate intellectual property, or sensitive health records, become vulnerable to this “harvest now, decrypt later” strategy. The prospect of future quantum decryption elevates the urgency of preparing defenses today, rather than waiting for quantum supremacy to catch organizations off guard.
Quantum Vulnerabilities in Web3 Ecosystems
The Web3 movement envisions a decentralized internet powered by blockchain technology, decentralized finance (DeFi) protocols, non-fungible tokens (NFTs), and smart contracts executing across distributed networks.
These platforms depend on cryptographic mechanisms to maintain trustless environments, secure digital identities, and manage tokenized assets without centralized intermediaries. Private keys underpin the ownership and transfer of cryptocurrencies and tokens, while secure hashing functions and digital signatures preserve network integrity and ensure participants adhere to protocol rules.
Failing to address quantum risks may erode confidence in decentralized ecosystems, leading to market instability, devalued assets, and lost user trust. Photo generated by AI
Quantum computing threatens to undermine these foundations. If malicious actors harness quantum algorithms to derive private keys from public addresses or forge digital signatures, they could manipulate smart contracts, drain liquidity pools in DeFi applications, counterfeit NFTs, or sabotage blockchain consensus. The ramifications would be devastating for users who trust the immutability and cryptographic reliability of these systems.
The decentralized nature of Web3 complicates the defense. Network-wide algorithmic upgrades require consensus among diverse participants—miners, validators, developers, and token holders—making transitions to quantum-safe cryptography a complex social and technical endeavor. Failing to address quantum risks may erode confidence in decentralized ecosystems, leading to market instability, devalued assets, and lost user trust.
Quantum-Resistant Cryptography and Defensive Strategies
Anticipating these challenges, researchers and standards bodies have focused on post-quantum or quantum-resistant cryptography. Unlike current methods that rely on problems easily solved by Shor’s or Grover’s algorithms, quantum-resistant schemes emerge from different mathematical foundations. Lattice-based cryptography, for instance, exploits the complexity of finding short vectors in high-dimensional grids. Code-based systems use error-correcting codes to present problems resistant to known quantum approaches. Multivariate cryptography and hash-based signatures add further variety, each grounded in assumptions that remain robust against quantum assaults.
International efforts, including those led by the U.S. National Institute of Standards and Technology (NIST), aim to standardize these new algorithms. The selection process involves rigorous security analysis, efficiency testing, and implementation checks. Once a stable of proven quantum-resistant algorithms is established, migrating classical and decentralized systems to these standards will become a priority. Financial institutions, government agencies, and Web3 developers can then adopt these algorithms to safeguard future transactions and communications.
On top of cryptographic shifts, other quantum security tools offer additional resilience. Quantum key distribution (QKD) uses quantum states to exchange keys securely, revealing any eavesdropping attempt. Though challenging to implement at large scales and not a panacea, QKD could complement quantum-safe encryption methods, establishing a multilayered defense for critical connections. Meanwhile, quantum-secure protocols might enhance authentication systems, detect anomalies more efficiently, or ensure data integrity, turning quantum principles into defensive assets rather than threats.
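The key-exchange idea behind QKD can be illustrated with a classical simulation of BB84's sifting step. This is an idealized toy (no eavesdropper, no channel noise, random bits standing in for photon measurements), meant only to show why matching measurement bases yield shared key material:

```python
import secrets

def bb84_sift(n=2000):
    """Toy BB84 sifting. Alice sends random bits encoded in random
    bases; Bob measures in his own random bases. Where the bases match
    (about half the time) Bob reads Alice's bit exactly; where they
    differ his outcome is a coin flip and the position is discarded."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Bases (never the bits) are compared publicly; only matches are kept.
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

key_a, key_b = bb84_sift()
print(len(key_a), key_a == key_b)  # roughly n/2 sifted bits, keys agree
```

In the real protocol, an eavesdropper measuring in the wrong basis disturbs the photons, so Alice and Bob detect interception by sacrificing and comparing a random sample of the sifted bits.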
In the Web3 arena, upgrading smart contracts to incorporate quantum-resistant keys and adjusting hashing algorithms become vital tasks. Developers may deploy hybrid approaches, mixing classical and post-quantum cryptography to ensure backward compatibility while incrementally strengthening security. Such gradual transitions help prevent sudden shocks and maintain user confidence.
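A hybrid handshake of the kind described above can be sketched as deriving one session key from two independently established shared secrets, one classical (say, ECDH) and one post-quantum (a KEM). The single-hash KDF below is a hypothetical stand-in for the proper HKDF constructions real protocols use; the point is only that an attacker must break both exchanges to recover the key:

```python
import hashlib

def hybrid_secret(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """Combine two shared secrets into one session key. The derived key
    is at least as hard to recover as the stronger of the two inputs,
    so classical security survives even if the PQ scheme later falls,
    and vice versa. Illustrative labeled-concatenation KDF (assumed)."""
    return hashlib.sha256(
        b"hybrid-v1|" + context + b"|" + classical_ss + b"|" + pq_ss
    ).digest()

# Hypothetical 32-byte secrets standing in for ECDH and KEM outputs:
key = hybrid_secret(b"\x01" * 32, b"\x02" * 32, b"session-42")
print(key.hex())
```

Because the derivation is deterministic in its inputs, both endpoints compute the same key, while changing either secret or the context label changes the output completely.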
Protocols might establish timelines for phasing in quantum-safe schemes, ensuring that wallets, node software, and decentralized applications support new cryptographic primitives. Photo generated by AI.
Navigating the Post-Quantum Transition and Future Outlook
The current limitations of quantum hardware give defenders a valuable head start. The quantum machines of today, including Tianyan-504 and Willow, remain at a proof-of-concept stage, still grappling with error rates and coherence issues. Yet, ignoring this window of opportunity would be shortsighted. Organizations must inventory cryptographic assets, identify vulnerable algorithms, and plan orderly migrations to quantum-resistant solutions. The cost of inaction grows with each step quantum computing takes toward feasibility.
For Web3 communities, consensus-based upgrades may require on-chain governance votes or carefully orchestrated forks. Protocols might establish timelines for phasing in quantum-safe schemes, ensuring that wallets, node software, and decentralized applications support new cryptographic primitives. This collaborative adaptation maintains the core principles of decentralization—open participation, transparency, and stakeholder input—while strengthening security foundations.
Ultimately, quantum computing’s influence on cybersecurity and Web3 can be managed through foresight and preparation. Rather than reacting to a crisis once a powerful quantum machine is unveiled, the global community can adopt preventive measures now. Incorporating quantum-safe cryptography, experimenting with quantum-secure protocols, and preparing migration paths for decentralized networks position organizations and users to weather the quantum transition.
This proactive stance preserves the integrity and functionality of financial services, digital marketplaces, and governance mechanisms that define the decentralized internet. It reassures participants that their assets and identities remain protected even as computational frontiers expand. Quantum computing may reshape cryptographic challenges, but with careful planning and timely implementation of new standards, the promise of a secure digital ecosystem—classical or quantum—can endure.
Yesterday, on Christmas Eve, Dec. 24, at 6:40 AM EST, NASA’s Parker Solar Probe made its closest-ever approach to the Sun, passing roughly 3.8 million miles from its surface. The spacecraft was launched in 2018 to observe our Sun and its outer corona. The probe is also the fastest object ever built by humanity, reaching about 690,000 km/h, or 191 km/s, nearly 0.064% of the speed of light.
Dr. Nicola Fox, head of science at NASA, told BBC News: “For centuries, people have studied the Sun, but you don’t experience the atmosphere of a place until you actually visit it.”
The closest approach, 3.8 million miles (6.2 million km) from the surface of the Sun, may not sound close. But NASA scientist Dr. Nicola Fox puts it like this: “We are 93 million miles away from the Sun, so if I put the Sun and the Earth one meter apart, Parker Solar Probe is four centimeters from the Sun – so that’s close.”
The probe is designed to withstand temperatures of up to 2,500° Fahrenheit (1,370° Celsius) thanks to its Thermal Protection System. During the flyby, the probe endures temperatures of around 1,400°C, and that much radiation could frazzle its onboard electronics. It is protected by an 11.5 cm (4.5 inch) shield made of carbon composite, and the plan is to get in and out fast, at the highest speed ever achieved by a human-made object.
In human terms, this is the equivalent of flying from New York to London in under 30 seconds, at a mind-blowing 430,000 mph. This enormous speed comes from the Sun’s gravitational pull as the probe swings through perihelion, its closest point to the Sun.
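The figures quoted above are easy to sanity-check with a little arithmetic (the New York-to-London distance of about 5,570 km is an assumed round number):

```python
C_KM_S = 299_792.458            # speed of light, km/s

speed_kmh = 690_000             # quoted top speed
speed_km_s = speed_kmh / 3600
print(f"{speed_km_s:.0f} km/s, {speed_km_s / C_KM_S:.3%} of light speed")

# New York -> London great-circle distance, assumed ~5,570 km:
print(f"NY to London in ~{5570 / speed_km_s:.0f} s")

# The one-metre scale model: Sun-Earth 93M miles -> 1 m,
# so a 3.8M-mile perihelion shrinks to:
print(f"{3.8 / 93 * 100:.1f} cm from the Sun")
```

The three prints reproduce the article's numbers: roughly 192 km/s (0.064% of light speed), a sub-30-second transatlantic hop, and about 4 cm on the metre-stick model.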
Photo: NASA
One longstanding mystery concerns the Sun’s outer atmosphere, the “corona”. Solar physicists know that the corona is really, really hot, but not why. Strikingly, the surface of the Sun is about 5,500°C (5,772 K), yet the corona, the outer atmosphere we see during solar eclipses, measures in the millions of degrees. How can the atmosphere be hotter than the surface?
Through this mission, scientists hope to understand the corona and the constant solar wind bursting out of it. These particles interact with Earth’s magnetic field, giving us the beautiful Northern Lights. They also cause space weather problems, disrupting our satellites, electronics, and communication systems.
Interactions between the radiative and convection zones within the Sun’s interior contribute to heating our star’s corona. Astronomy: Roen Kelly
That is why scientists think that “by understanding the Sun and its activity in detail, the space weather, the solar wind, we can understand more about its effect on our daily lives.”
NASA scientists had waited for this Christmas for the Parker Solar Probe to make its nearest flyby of the Sun and still remain ‘safe’ for future observations. They recently posted via @NASASun: “Parker is amid its flyby and can’t communicate with us until Dec. 27, when it will send its first signal to let us know it’s safe.” Hopefully it will be, and will help us explore the solar secrets in more detail.
This year’s Nobel Prize in Chemistry demonstrates the growing power of computational and AI tools to assist scientists towards greater discoveries.
In the early 20th century, Alois Alzheimer, a psychiatrist and neuropathologist, observed abnormal webs and tangles under the microscope in postmortem brain samples of people who had suffered early-onset memory loss (dementia). He could not, however, identify what these were made of. Over the years, scientists identified them as clumps of misfolded proteins; the condition is now named after him: Alzheimer’s disease.
Proteins are chains of amino acids, among the building blocks of life; indeed, many scientists believe that the formation of amino acids on Earth was a significant step towards the origin of life. There are thousands of proteins in the human body performing diverse functions: name a function, and there is invariably a protein associated with it.
Intriguingly, proteins, built as chains of different combinations of a mere 20 amino acids, can show this huge diversity of functions. It all boils down to the way a protein is folded, what scientists call its ‘native structure’: how the string of amino acids is arranged in 3D space.
A protein can perform its assigned function only if properly folded; a denatured protein (one that has lost its 3D structure, like an open random coil) or a wrongly folded one cannot. Misfolding of proteins is linked to several debilitating conditions such as Parkinson’s, Amyotrophic Lateral Sclerosis (ALS), and Alzheimer’s, as Alzheimer himself observed under the microscope.
There are, however, an astronomical number of ways in which a protein could fold. Imagine millions of rugged valleys across a vast landscape, with the goal of throwing a stone into the deepest one. The same applies to finding the native structure of a protein. This is termed the ‘protein folding problem’.
In 1961, Christian Anfinsen of the National Institutes of Health (NIH) observed that denatured proteins can fold back to their original functional state in a matter of seconds. That proteins manage to do so in such a short time, despite the multitude of possibilities, is a paradox, now famously known as Levinthal’s Paradox after Cyrus Levinthal, the MIT scientist who proposed it in 1968.
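Levinthal's argument is itself a quick calculation. With commonly used (assumed) round numbers, roughly 100 residues, about 3 backbone conformations per residue, and an impossibly generous sampling rate, a random search would outlast the universe many times over:

```python
# Levinthal's back-of-envelope estimate, with assumed round numbers.
conformations = 3 ** 100        # ~3 conformations per residue, ~100 residues
rate = 1e13                     # conformations sampled per second (generous)

seconds = conformations / rate
universe_age_s = 13.8e9 * 365.25 * 24 * 3600   # ~13.8 billion years

print(f"{conformations:.2e} possible conformations")
print(f"random search: ~{seconds / universe_age_s:.1e} ages of the universe")
```

The search space is about 5 × 10^47 states, yet real proteins fold in seconds, which is exactly why Anfinsen's observation implies the sequence encodes a directed folding pathway rather than a random search.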
These observations suggested that the 3D structure is coded in the sequence itself and that some important physical forces are in play that direct the protein to be folded a certain way, making its most stable state (the native state) easily accessible, rather than searching for the stable state randomly.
Protein Structure: The History
Scientists have been working on identifying the structures of proteins since the 1930s. The landmark came when John Kendrew and Max Perutz figured out the structures of myoglobin and hemoglobin, the oxygen-storing and oxygen-carrying proteins respectively, using X-ray crystallography, essentially photographing the atoms of a protein with X-rays. Identifying a protein structure, though, is no easy task: it requires weeks or months of painstaking experiments and analysis.
In the early days of crystallography, scientists would spend years trying to crystallize proteins to study their structures, and many proteins simply couldn’t be crystallized. Several more months were required to analyze the experimental outcomes and come up with a sensible structure. Over the years, the field of structural biology evolved, with people finding structures of more and more proteins. More advanced tools including cryoelectron microscopy and NMR were being used routinely to study protein structure.
John Kendrew (left) and Max Perutz with their model. Credit: Medical Research Council Laboratory of Molecular Biology, UK
“Seeing is believing,” says Dr. Mohd Taher, a postdoctoral researcher working on proteins and enzymes in the Department of Chemistry, University of Illinois Urbana-Champaign, USA. Although researchers succeeded in identifying protein structures, one question remained largely unanswered: given a sequence of amino acids, is it possible to predict the native structure?
This quest inspired a group of structural biologists to start CASP (Critical Assessment of Protein Structure Prediction), a friendly competition held every two years to accelerate advances, in which participants use their models to predict the structures of proteins whose structures are not yet publicly available.
Some of the earlier winners tried predicting the structures based on the physicochemical properties of amino acids and how they interact with each other to model how these interactions will direct the 3D structure formation. Some came up with the idea of looking at several related proteins to find the pattern of how similarly coded regions fold.
Yet others looked at amino acids that mutated together during evolution and postulated that if they changed together, they should sit close to one another, influencing each other in the folded state. Prediction success, however, remained bleak, mostly below 50 percent accuracy.
A game-changer in the protein folding problem
The CASP competition of 2020, however, was a game-changer for structure prediction. Researchers from Google’s DeepMind, John Jumper, Demis Hassabis, and their team showcased AlphaFold2, built on improved deep learning algorithms that used “transformers” to learn from hundreds of thousands of known protein structures and applied that learning to predict the structure of a new protein.
The earlier version, AlphaFold1, presented at CASP in 2018 and based on convolutional neural networks, had already placed first among the competitors. This time, DeepMind’s algorithm outshone the others by a large margin. The CASP jury was in for a surprise: AlphaFold2 produced structures that were more than 90 percent accurate on the tested proteins.
AlphaFold 2 performance, experiments, and architecture. Credit: Wikimedia
The team described that they designed novel “training procedures based on the evolutionary, physical and geometric constraints of protein structures.” In the study published in Nature, they discuss the structure of the neural network used to train AlphaFold.
“The complex layers of neural networks succeeded in learning the outcomes of the physical processes of protein folding, capturing effects such as the propensity of some amino acids to form certain shapes, like alpha helix and beta-sheets, and the interactions of amino-acids with the surrounding environment (water and other amino acids)”, says Taher.
AlphaFold managed to predict the structure of an amino acid sequence in mere minutes, compared to experiments that took several months. "AlphaFold, however, cannot replace experiments. The final validation requires an experimental structure determination," says Dr. Natesh Ramanathan, an associate professor in the School of Biology and Center for High-Performance Computing (CHPC), Indian Institute of Science Education and Research, Thiruvananthapuram, India.
Talking about the significance of prediction tools in protein research, Dr. Natesh said “These computational tools help in speeding up experimental identification of protein structures, allowing researchers to focus on more advanced problems.”
Is AlphaFold memorizing instead of learning?
The success of AlphaFold in the accurate prediction of protein structures is no doubt one of the best examples of the AI revolution in science. However, there is still scope for improvement. A recent case study by a team at NIH, Bethesda, USA showed that AlphaFold fails to predict the structures of proteins that can switch shapes as part of their function.
They showed evidence that the algorithm at some point had started to memorize the patterns rather than learning them, leading to incorrect predictions for more complicated structures. Dr. Natesh says, “As is the case for any method of bioinformatics, AlphaFold too is only as good as the database it is trained on.”
However, as scientists rely on increasingly sophisticated computational tools to predict protein structures, there is also a downside: it is quite difficult to decode which factors contribute to the final result. The AI algorithm works as a black box that spits out protein structures, leaving researchers wondering what led to a given structure.
The success of AlphaFold in the accurate prediction of protein structures is no doubt one of the best examples of the AI revolution in science. Credits: DeepMind
It is also not clear if the models have learned some new physics that humans have not yet figured out. It is an interesting question, since machine learning algorithms are designed to identify patterns that might be invisible to humans. This might be the case, but it is difficult to extract this information from the models.
While researchers can now predict more accurate structures, the fundamental questions remain: what is the complete physics underlying protein folding, and how does the process happen so fast? According to Dr. Natesh, "In the A to Z of the protein folding problem, steps B to Y are still unsolved." But for many important applications, one can work with the output structure without worrying much about how the algorithms zeroed in on it.
From prediction to design
While many were interested in solving the protein folding problem, David Baker of the Institute of Protein Design, University of Washington, wished to go a step further. One of the regular participants in CASP, Baker was working on protein structure prediction, developing an algorithm called Rosetta, based on modeling the interactions between amino acids to predict the structure.
He envisaged an idea: why not use the existing knowledge of protein-folding preferences to design a completely new protein, a string of amino acids that might fold into a shape for a specified function? This is essentially the reverse of the problem that AlphaFold addresses.
This problem is considerably different from protein engineering, which has been around for a while: modifying existing proteins to improve efficiency or perform new functions. Smaller steps in this direction were taken by other research groups by the end of the 1980s, making short strings of amino acids called peptides, inspired by naturally occurring proteins.
The arrangements were predicted taking into account that some of the amino acids are hydrophobic (molecules that stay away from water) in nature while some are hydrophilic (molecules that like to interact with water). But it was David Baker’s group in 2003 that succeeded in the remarkable feat of computationally designing an entirely new protein whose structure or amino acid sequence bore no similarities to the known protein structures.
"This was something that had never been achieved before," said Dr. Natesh. "Not only did they design a protein made of 93 amino acids (now called Top7) 'de novo' (meaning anew) using computational tools, but they validated it using crystallographic techniques."
This had profound implications in many different fields including medicine, health, and biotechnology. A group in the Institute of Protein Design used computational protein design to develop a vaccine for the SARS-CoV virus. “It’s exciting”, says David Baker, in an interview on the Nobel Prize website.
Way Forward
Both these feats, which jointly won the Nobel Prize in Chemistry this year, demonstrate the rising power of computational and AI tools to assist scientists toward greater inventions. Together, they have opened new avenues for innumerable applications.
Timelines have been compressed drastically with AlphaFold. Designing proteins for different functions, ranging from medicines to molecules that catalyze difficult reactions, whether capturing methane or carbon dioxide from the atmosphere or helping break down plastics, could lead to sustainable solutions.
However, protein structure, albeit a significant aspect, is not the only one to address real-life problems. To create a new drug, information is required on how a drug interacts with a protein. One needs to understand the behavior of proteins in the more complex environment of the living cells. Scientists are already working on tackling these challenges, one step at a time.
You may have heard of Sherlock Holmes, the fictional detective who solves mysterious criminal cases using exceptional deductive reasoning, observational skills, and scientific knowledge to analyze evidence. Such intellectual characters also exist in real life.
A proud forensic scientist, Dr. Shahid Nazir Paracha has dedicated more than a decade to advancing forensic science in Pakistan, with experience in both field investigations and laboratory analysis. Dr. Shahid has contributed to solving cases involving homicide, rape, terrorism, and personal identification. He has also trained many professionals, leaving an indelible mark on the forensic landscape of Pakistan.
Dr. Shahid is currently associated with the Department of Forensic Medicine, University of Health Sciences (UHS), Lahore. He is an adjunct faculty member at the Punjab University Law College and a special editor of forensic science and criminology for the Journal of Basic & Clinical Medical Sciences. Before joining UHS, he served the Punjab Forensic Science Agency (PFSA) as a forensic scientist. Dr. Shahid's expertise extends beyond conventional forensics: in the book "Modeling and Simulation of Functional Nanomaterials for Forensic Investigation," published in 2023, he explores the use of nanotechnology in enhancing forensic precision and efficiency.
Dr. Shahid is currently associated with the Department of Forensic Medicine, at UHS Lahore.
In an insightful conversation, Dr. Shahid discussed his journey to forensic science, integrating forensic science and nanotechnology, and Pakistan’s position in adopting these changes. Here are some snippets from our engaging conversation.
Hifz: Thank you for sparing time from your hectic schedule. We will start with your journey, what inspired you to delve into the world of forensic science?
Dr. Shahid: This is quite a tricky question; I came to this field accidentally. In 2011, we were the first, pioneering batch of the MPhil in Forensic Science in Pakistan. Before this, only separate subjects were available, like Forensic DNA and Forensic Chemistry.
At that time, PFSA was in the pre-operational phase, and former Director General Dr. Muhammad Ashraf Tahir, an expert in this field, came from the USA to establish PFSA in Lahore. With his help, the University of Veterinary and Animal Sciences (UVAS) Lahore planned the first-ever forensic science program in Pakistan's history.
Luckily, Dr. Ashraf Tahir supervised my MPhil project. Soon after completing my MPhil, I got a job at PFSA, and as we delved into real cases, I came to understand the strength of forensic science. I turned to practical forensics, or you could say I was drawn by its fascination: this is the science for justice. We handle hundreds of cases and are satisfied that, with our help, someone is getting justice.
In the meantime, alongside the job, I pursued my PhD in Forensic Science as part of the first PhD Forensic Science batch at UHS in 2015. I joined UHS as a full-time employee in 2017. I have been honored and privileged, and I dedicate all of this to the supervisors and teachers who guided me this far.
Hifz: As you mentioned, forensic science in Pakistan traces back to 2011, and you are one of the country's early forensic experts. What exactly is forensic science? And what is Pakistan's current position in the forensic world?
Dr. Shahid: Forensic science is a multidisciplinary field that applies scientific methods and principles to investigate crimes and provide evidence for legal proceedings. It involves the analysis of physical evidence such as DNA, fingerprints, bloodstains, firearms, and digital evidence to reconstruct events and link suspects to crimes. There are many subdisciplines: Forensic Genetics, Forensic Toxicology, Forensic Chemistry, Digital Forensics, Crime Scene Investigation, and Ballistics.
Body and evidence marking at the scene of the crime (Credits: Cottonbro studio)
Pakistan's law-and-order and justice systems are unstable; that said, forensic science is crucial for ensuring justice, maintaining law and order, and reducing crime in Pakistan.
To counter terrorism and control crime, forensic science is vital. It offers the most advanced techniques for solving terrorism cases, which would not be possible without it. To increase the efficacy of our judicial system over the next 10-12 years, there is a pressing need for every crime to have a forensic report.
With the help of forensic science, homicide, sexual assault, and drug trafficking cases are being solved. Our work is definitely of international standard: we have the privilege of the PFSA, a world-renowned lab and Asia's biggest, where forensic methods are implemented with standard protocols.
We claim that a report issued by the PFSA Laboratory cannot be challenged anywhere in the world. Human rights are often neglected in Pakistan, and many of our people are underprivileged. In sexual assault cases like Zainab's murder, and there are many examples like this, without forensic DNA technology it would not have been possible for law enforcement agencies to reach the suspects.
The challenges are limited financial resources, limited access to advanced technologies, and a lack of skilled experts. I already quoted the example that there are only 3-4 PhDs in pure forensic science in Pakistan, causing a technological gap.
Hifz: In recent times, you co-authored a book “Modeling and Simulation of Functional Nanomaterials for Forensic Investigation”. For those unfamiliar with the concept, how would you define nanotechnology and its significance in forensic science?
Dr. Shahid: Nanotechnology is the manipulation and application of materials at the nanoscale, typically between 1 and 100 nanometers. At this scale, materials exhibit unique physical, chemical, and biological properties. Nanotechnology is not a brand-new field; it has a recent history and is already being used in different disciplines, such as chemistry.
With the help of nanomaterials, nanocomposites, and nanoparticles, we can work at the nanoscale. We have adopted nanomaterials that can be utilized and are helpful in forensics.
For example, for fingerprint detection we normally use simple dusting or black powders, which are magnetic- or chemical-based. Nanoparticle powders of gold, silver, and zinc oxide enhance the visibility of latent prints; they are environmentally friendly and give highly sensitive and specific results.
Latent fingerprint development and lifting kit. (Credits: Carolina Biological Supply)
In DNA analysis, we currently use an expensive method for STR analysis. Research is ongoing into how nanomaterials can help obtain purified DNA from small and degraded samples more effectively than traditional methods. Nanotechnology applies to drugs, toxins, biological evidence, trace evidence, explosives, and fire debris analysis as well.
The advantages of nanoparticles in forensics are that they are highly sensitive and specific. They provide rapid analysis with rapid test kits, which are portable and can be brought to the crime scene.
We do not yet have a method to collect DNA and run the Polymerase Chain Reaction (PCR) at the crime scene, but with the help of nanoparticles, portable devices will become available. Nanotechnology enhances the accuracy of results and reduces the risk of contamination and false positives.
Hifz: A couple of weeks ago, I reviewed the literature on chemical terrorism and the role of forensic science. In incidents of terrorism, detecting explosives in an open environment is a major challenge. Could you explain how nanosensors enhance sensitivity and accuracy in this domain?
Dr. Shahid: Nanomaterials have revolutionized explosive detection by offering unparalleled sensitivity, specificity, and speed in identifying explosive compounds. Their unique properties, such as high surface area and conductivity, make them ideal for detecting trace amounts of explosives in complex environments, such as humid conditions.
Explosives release Volatile Organic Compounds (VOCs) that interact with nanomaterials, causing measurable changes in electrical, mechanical, and optical properties, in the form of color changes or changes in electric and magnetic current.
Nanostructured sensors include metal oxide particles, carbon nanotubes (CNTs), and graphene-based sensors. These sensors and tubes detect the changes that occur when organic compounds interact with nanoparticles.
Several types of nanoparticles involved in detection include quantum dots and metal-organic frameworks (MOFs). They emit fluorescence when they react with different types of explosive vapors, in humid or other environments.
For field detection, nanosensors are built into portable devices. On-site lab-on-chip detection is our own project, and I can share that lab-on-chip devices are chips coated with different types of quantum dots and nanomaterials. We can bring them to crime scenes and detect explosives rapidly in the field.
As far as chemical terrorism is concerned, nanosensors owe their sensitivity to their high surface-to-volume ratio and enhanced electrical properties. They have tailored surface functions and quantum effects, and they offer improved accuracy compared with other chemical methods that detect the presence of explosive materials through color changes.
Nanosensors are designed to target specific molecules, particularly explosives. They can be integrated with pattern-recognition systems, tolerate harsh environments, and respond rapidly. In rough conditions such as rain or moisture, chemical methods may run into problems, but nanomaterials provide the most accurate and sensitive detection of trace explosives.
Hifz: Biological molecules and body fluids are very sensitive and present only in trace amounts at the crime scene. Crime scene investigators usually take great precautions while lifting them, for fear of contamination. So how can we employ nanomaterials in detecting biological materials like DNA? And is it possible to use similar approaches for body fluids?
Dr. Shahid: Nanomaterials have shown tremendous potential in the detection of biological materials such as DNA and body fluids, thanks to their unique properties, including high surface area, biocompatibility, and the ability to be functionalized for specific molecular interactions. They enable highly sensitive, rapid, and specific detection techniques, which are particularly valuable in forensic science, medical diagnosis, and environmental monitoring.
The trace amount of blood (source of DNA) at the crime scene (Credits: Cottonbro studio)
Currently, we use different chemical methods based on strips and color changes, or chromatographic techniques, each with its limits and challenges. Nanoparticles can interact with DNA through various mechanisms, such as adsorption and hybridization. This interaction is used to detect, quantify, and analyze DNA sequences.
The nanoparticle-based approaches we use rely on some costly materials. Gold nanoparticles functionalized with complementary DNA probes change their optical properties upon hybridization, enabling colorimetric detection of DNA sequences. Graphite and graphene oxide have a strong affinity for single-stranded DNA (ssDNA).
Body fluids like blood, semen, and saliva interact with specific biomarkers or chemical components. There are colorimetric and fluorescence sensors, surface-enhanced Raman spectroscopy (the major nanoparticle-based technique), graphite sensors, magnetic nanoparticles, and nanostructured biosensors, all for biological detection.
We use nanoparticles for biological detection because of their speedy results; the devices are portable and capable of multiplexing, enabling enhanced simultaneous detection of multiple targets, such as different DNA sequences. There are some challenges, like the high cost of gold nanoparticles, but given their stability and standardization, scientists are working to reduce that cost.
Hifz: What are the most promising advancements in nanotechnology that will revolutionize forensic science?
Dr. Shahid: Definitely, Hifz. Nanotechnology is paving the way for groundbreaking advancement in forensic science by offering innovative tools and techniques that improve sensitivity. Advanced nanosensors are impactful, and different types of portable surface-enhanced Raman spectroscopy devices are now available worldwide.
For enhanced DNA analysis, magnetic nanoparticles are available for the isolation and purification of complex samples, yielding high-quality DNA; gold nanoparticles are another example. For latent fingerprints, quantum dots and metallic nanoparticles such as gold, silver, and magnesium enhance prints compared with the powder technique; through nanoparticle luminescence, they reveal fingerprints under UV light with high clarity.
For body fluids like blood, semen, and saliva, biomarkers and advanced techniques such as lab-on-chip are available. These are nanomaterials or nanosensors fitted on a simple, small chip that can be used at crime scenes for analysis. For drug and toxin detection, nanosensors functionalized with molecularly imprinted polymers to capture and identify specific substances are available.
In crime scene reconstruction, nanotechnology enhances the analysis of microscopic evidence. Nanoparticles for 3D imaging and mapping are also available. For microscopic trace evidence like gunshot residue (GSR), we can analyze samples with a scanning electron microscope (SEM); fiber and glass fragment analysis can also be done using nanotechnology.
Similarly, in forensic toxicology, nanomaterials have improved the detection and quantification of toxic substances, especially in biological samples. Nano lab-on-chip systems are being used to analyze blood, urine, and tissue samples, offering faster, more accurate toxicological analysis at a lower cost than High-Performance Liquid Chromatography (HPLC).
Hifz: Do you have any final thoughts or a message for young researchers aspiring to excel in this field?
Dr. Shahid: I think your batch is the first undergraduate batch in Pakistan. Among youngsters like you and your fellows, there is great enthusiasm for joining forensic science.
Forensic science is a multidisciplinary field; the undergraduate curriculum draws courses from the physical, biological, and chemical sciences. I recommend it to anyone who has an interest in a multidisciplinary field and can stay curious and resilient. Every piece of evidence tells a story, like a jigsaw puzzle, and as forensic scientists we work hard to solve the puzzle, complete the story, and reach the suspect.
Forensic science is all about ethics. This is not an ordinary science where a mistake simply means correcting a report; here, any mistake, corruption, or unethical practice can lead to someone's death. We must uphold truth and ethical standards in forensic science.
Ask about our ancestors, and Darwin points to monkeys and chimpanzees. Evolution from single-celled organisms to multicellular ones eventually gave rise to humans, and neuroscience reminds us that we resemble many creatures that came before us on that path. One of them is the fruit fly, Drosophila melanogaster.
Although the fruit fly is tiny, only a few millimeters long, it shares 60 percent of our DNA, including genes involved in learning, jet lag, and Down syndrome. Fruit flies also communicate with their romantic partners much as we do. Like humans, they sometimes have trouble navigating, respond to dark and light, avoid predators, and search for food.
Like us, they can also get drunk and have a similar aging cycle. They get excited and stressed, as they have similar neurotransmitters: glutamate, acetylcholine, and dopamine.
Several studies conducted on fruit flies to understand the functioning of the human body have earned ten Nobel Prizes. These studies helped scientists identify many similarities, such as the effects of caffeine on sleep cycles.
In fact, on 20 February 1947, fruit flies became the first living beings launched into space, sent up to determine the impact of radiation on living organisms. Scientists launched them from White Sands Missile Range on a V-2 rocket; in 190 seconds they reached an altitude of 109 kilometers and returned to Earth alive and unharmed, suggesting that humans could survive spaceflight too.
Since the early 1900s, scientists have used fruit flies in research. In 1901, the renowned scientist Dr. William Castle used fruit flies for the first time after finding them easier to study than guinea pigs. His single step paved the way for other scientists to demonstrate key phenomena of the human body via this insect.
Later, Thomas Hunt Morgan demonstrated through fruit flies that genes are carried on chromosomes, work that won him the Nobel Prize in Physiology in 1933. In 2000, in the Berkeley Drosophila Genome Project led by Dr. Gerry Rubin, scientists completely sequenced the genome of the fruit fly.
Turning to behavior and the brain: in 2003, the American geneticist Jeffrey C. Hall expanded research on the fruit fly brain, a focus since the 1970s, when Seymour Benzer began studying mutation and circadian rhythms. Hall discovered the pigment-dispersing factor protein that controls circadian rhythms and found that it is located in small ventral lateral neurons. For this, he shared the Nobel Prize in Physiology in 2017.
Expanding the research further into neuroscience, in 2023 the computational neuroscientist Michael Winding published a connectome of the fruit fly larva's brain in Science. Co-authored with other researchers, including Carey E. Priebe, the study found that the larva has 3,016 neurons and 548,000 synapses. The team took more than five years to create the complete connectome.
This year, after a decade of research, a group of scientists from Princeton University published a research paper in Nature on October 2, 2024. They mapped the whole brain of the adult female fruit fly; the resulting connectome has 139,255 neurons and 50 million synaptic connections.
It was the collaborative effort of FlyWire, a group of scientists from 146 labs of 122 institutions, primarily from the University of Cambridge and the University of Vermont.
The project was inspired by EyeWire, a crowdsourced effort that mapped the retina of a mouse in 2013. In 2012, the group publicly announced the project, inviting scientists to contribute. FlyWire is an extension of that earlier project, overcoming its technological limitations by pairing the latest advancements in AI with collaborating scientists.
Mala Murthy, director of the Princeton Neuroscience Institute, and Sebastian Seung, professor of computer science and neuroscience. Photo: Princeton University
Led by Sebastian Seung, a professor of computer science and neuroscience who was also part of EyeWire a decade ago, and Mala Murthy, a neuroscience professor and director of the Princeton Neuroscience Institute, the fruit fly project was a collaboration between the Seung and Murthy labs and the Allen Institute for Brain Science.
For this project, researchers sliced the brain into 7,050 layers and took around 21 million electron microscopy photos of it at the Howard Hughes Medical Institute's Janelia Research Campus.
With the help of an AI model designed by the Seung and Murthy labs, the researchers analyzed the images and traced the path of every neuron and synaptic connection, making predictions along the way. During this phase, the analysis yielded several discoveries.
First, the AI precisely predicted which neurons are activated by touch and taste stimuli. It found that, unlike along regular pathways, information travels quickly to the brain upon touch, yet on its way it is modulated by factors such as mood, activity, and the source of the stimulus.
Second, while reconstructing the anterior visual pathway, the AI spotted the ocular circuit responsible for visual guidance in the fruit fly. To help it navigate, the insect has 50 compass neurons, which tile together deep in the brain within the ellipsoid body. These neurons effectively connect the ocular circuit with the eyes by relaying information.
A part of the complete connectome: 75,000 neurons that make up the visual system of the adult female fruit fly. Data source: FlyWire.ai. Image rendered by Philipp Schlegel (University of Cambridge/MRC LMB). Photo: Berkeley
Third, the AI identified that information flows to and from the brain through the suboesophageal zone (SEZ), which receives information from all parts of the brain and sends signals to motor neurons.
AI sped up the project by analyzing the photos. Sebastian Seung, a lead scientist on the project, said it would have taken 50 thousand person-years to complete the project without AI. The scientists proofread the AI model's output and compiled it to create a 3D map of the fruit fly brain.
The scientists identified sugar-sensitive neurons in a fruit fly's mouth, coded green, that send signals to the brain upon detecting something sweet. The green neurons activate other neurons, coded light brown, some of which stimulate motor neurons to suck up the sugar. Photo: Berkeley
Discovery of new neurons in the adult fruit fly brain
Based on the information gathered, the team classified the neurons into 8,453 types, of which 4,581 were newly discovered. The scientists also found that the fruit fly's neural wiring, if stretched out, would span 490 feet, longer than a blue whale.
"The mapping of the adult female fruit fly brain will surely pave the way to understanding our brain," Sebastian says. "Anything about a brain that we truly understand tells us something about all brains." Now, moving toward the main goal, scientists at the E11 lab aim to map the mouse brain and make that dream a reality soon.
Understanding the human brain is the most challenging task as it is the most complex organ. However, the progress since last year with the development of connectomes makes scientists hopeful that soon there will be a cure for major neurological diseases.
Science is deeply embedded in every aspect of our lives, from the smartphones we use daily to the medicines we rely on and the policies that shape our societies. Yet the scientific community remains quite isolated from the general public; the most evident proofs of this are misconceptions about vaccines, climate change, and other novel technologies. Bridging that gap is not just about simplifying complex ideas; it is an art that involves empathy, storytelling, and a commitment to making science accessible to all.
This has been a transformative year for me in particular. Science communication is not just knowledge transfer; it builds curiosity, empowerment, and trust between science and society. This article recounts some of the lessons I learned during my MPhil and the Scientia Pakistan Science Writing Internship Program Cohort Three, held from September to December 2024.
Discovering the significance of Science Communication
My MPhil research on superhydrophobic fabric sparked my interest in science communication. The study revolved around creating self-cleaning surfaces using zinc oxide nanoparticles. As exciting as the scientific work was, I soon learned that its impact would be limited if I could not tell a broader audience why it matters. Concepts like nanotechnology and contact angles were hardly understandable to a non-technical audience, and this pushed me to improve my communication skills.
It was a turning point when I joined the Scientia Science Writing Internship (Cohort Three). That is where I learned the power of storytelling in science communication and engaged in hands-on activities to make complex scientific concepts accessible and engaging. Writing on topics such as quantum computing, astrophysics, and climate change taught me how to use analogies, narratives, and visual elements to reach a diverse audience.
Insights from the Scientia Webinars
During the internship program, we had the opportunity to listen to world-leading science journalists and communicators and engage with them in Q&A sessions. Webinars with experts such as Rachel Jones and Alex Dainis filled the internship with invaluable insights into the art of science communication. Rachel Jones is the Director of Journalism Initiatives at the National Press Foundation and has worked as a journalist and media consultant for more than 30 years in the US and Africa, for organizations including the Detroit Free Press, National Public Radio, Internews, the International Center for Journalists, Kenya's Nation Media Group, and Voice of America.
Rachel Jones has many years of experience in health communication and has demonstrated to communities the power of storytelling in transforming scientific facts into narratives that resonate with the general public. From her early work in Chicago to her campaigns in Kenya, she has focused on using journalism as a tool to raise social and health awareness and to prepare researchers and journalists to drive well-informed change.
Rachel Jones emphasizes the importance of making scientific information accessible to the public through storytelling, stating: “For too long we have treated data and research as this sort of alien thing or this formal thing… I want journalists to understand and embrace the use of science in storytelling and how they can make it more accessible to more people.”
Alex Dainis is a renowned science communicator and video producer with ten years of experience creating digital educational content for the web. She holds a PhD in genetics from Stanford University, and her YouTube videos and TikTok reels have reached millions of viewers worldwide.
In her webinar, Dr Alex focused on blending storytelling with science to make complex concepts engaging and accessible. Her creative use of media to demystify subjects like genetics and neuroscience particularly inspired me.
She spoke frankly about the challenges she faced in explaining ground-breaking research, such as DNA sequencing in space. This openness both teaches and empowers people to engage with science, helping make it accessible to all.
Dr Alex encourages young people to embrace uncertainty and the dynamic nature of knowledge to support them in STEM communication. She said: “It’s okay to not know everything; as long as you are learning, you are on the right path.”
Applying these lessons, I began exploring diverse aspects of science communication. Writing about quantum mechanics was one such difficult yet fruitful experience. I drew analogies, comparing quantum superposition, for example, to a coin spinning mid-air, to make the abstract idea concrete. Similarly, when explaining the physics of climate change, relating it to everyday experiences like rising temperatures and extreme weather made its urgency easier for readers to grasp.
One of the most important junctures in my writing came with the topic of renewable energy technologies. I chose to tell the story of how such innovations can address global challenges like climate change and the energy crisis. For instance, I described a solar-powered village in a rural setting, where science transforms lives and gives people reason to hope. Stories like these connected me with readers by turning abstract ideas into real-life possibilities.
My other contribution was in science education: I created content that simplifies scientific concepts for students, aiming to inspire the next generation of scientists and innovators. This work has been deeply fulfilling, as it aligns with my goal of making science accessible to everyone, regardless of background.
The Challenges of Science Communication
Despite its rewards, science communication has significant challenges. One of the biggest hurdles is balancing accuracy with simplicity. Oversimplifying complex concepts can lead to misinformation, while overly technical explanations risk alienating readers. I’ve experienced this challenge firsthand while writing about quantum mechanics. Simplifying concepts like entanglement without distorting their meaning required careful thought and creativity.
Another challenge is the storm of misinformation on digital platforms. As Rachel Jones noted in her webinar, the communicator’s role extends beyond disseminating correct information to dispelling myths and earning the audience’s trust. It is a responsibility I take seriously, and I strive to produce content that is both credible and engaging.
Digital platforms bring both opportunities and challenges. Blogs, podcasts, and social media have democratized access to information, giving communicators the chance to reach a worldwide audience. However, in an era of information overload, content must be concise and visually appealing to compete for attention. Following Dr Alex Dainis’s advice, the use of visuals and analogies has been instrumental in helping me navigate this landscape.
Looking Ahead: My Vision
I aim to contribute to science communication in Pakistan for better scientific awareness. One of my key goals is to bridge the gap between researchers and the public by creating content that is both informative and inspiring. Whether it’s explaining the physics of climate change, the potential of renewable energy technologies, or the mysteries of quantum computing, my focus will remain on making science relatable and relevant.
I also look forward to working with educators, scientists, communicators, and organizations like Scientia Pakistan to develop resources that advance scientific literacy. I plan to organize workshops and online classes for students and aspiring communicators on the art of storytelling in science.
Using platforms like blogs, TikTok, and YouTube to reach a bigger audience is another area I am passionate about. This approach aligns with Dr Alex’s vision of using visuals and short-form messaging, essential tools for grabbing people’s attention in today’s fast-paced world.
Playing the Peacemaker through Science Communication
Science communication is an art, a responsibility, and a bridge between science and society. It transforms complex ideas into stories that inspire curiosity, empower individuals, and drive positive change. As I journey on, I remain committed to making science accessible to all.
Here, every story told, every misconception corrected, and every idea simplified brings us closer to a world where science is not confined to laboratories or textbooks but belongs to the people. Combining the lessons learned from my research, internship, and experts, I look forward to building a future where scientific literacy is not just a goal but a shared responsibility, connecting science to society in meaningful and transformative ways.
NASA has launched an important mission to study Europa, one of Jupiter’s moons. The launch took place from Cape Canaveral, Florida, on October 14. The spacecraft will travel approximately 1.8 billion miles and is expected to reach its destination by the early 2030s, with regular scientific observations anticipated to begin by 2031.
Dr. Nozair Khawaja is part of the Europa Clipper mission research team. He is also involved in Japan’s DESTINY+ mission. Originally from the Punjab province of Pakistan, Dr. Khawaja has previously worked on several space missions for the European Space Agency and NASA. Saadeqa Khan conducted an exclusive interview with Dr. Nozair Khawaja to discuss the goals and significance of this mission.
The mission is dedicated to the in-depth study of Europa and will enhance scientists’ understanding of potentially life-sustaining environments on celestial bodies beyond Earth. Photo: Dr Khawaja
What are the main objectives of the Europa Clipper mission?
According to Nozair Khawaja, the primary purpose of the Europa Clipper mission is to assess the habitability of Europa, one of Jupiter’s moons. He outlined three additional key scientific objectives of the mission.
Firstly, the mission aims to determine the thickness of Europa’s ice shell and how it interacts with the underlying ocean. Secondly, researchers intend to gather data on the composition of Europa’s ocean and its mantle. The third objective focuses on characterizing the features of Europa’s surface geology.
Dr Khawaja said that the mission is devoted to the in-depth study of Europa and will enhance scientists’ understanding of potentially life-sustaining environments on celestial bodies beyond Earth. He also clarified that while media worldwide often suggest the Europa Clipper spacecraft will search for signs of “alien life,” finding such signs is merely a secondary objective. The mission instead focuses on exploring Europa’s ocean for the ingredients necessary for life.
What is Dr Khawaja’s role in the mission?
Dr Nozair Khawaja spoke about the design process of a space mission, explaining that all the experts, researchers, and scientists involved in the project contribute to it. Dr. Khawaja is directly involved in this mission as a team member of SUDA, a specialized instrument installed aboard the spacecraft for in-depth research.
He further explained that the team will analyze data from the Europa Clipper instrument to determine the composition of the subsurface ocean of icy Europa. The device aims to identify components essential for life, known as biosignatures.
Does Europa have twice as much water in its oceans as Earth?
The Galileo mission, which reached Jupiter in 1995 to explore the planet and its moons, conducted 12 flybys of Europa and identified an ocean hidden beneath its icy surface. Europa’s ocean is estimated to be between 40 and 100 miles deep, making it about 16 times deeper than Earth’s oceans.
Dr Khawaja said that salty water exists on Europa in a liquid state beneath a thick layer of ice, estimated to be 10 to 15 miles thick. Several hypotheses have been proposed about the amount of water in Europa’s ocean, based on critical data collected by the Galileo mission. Europa’s ocean may contain twice the total volume of water found on Earth. However, conclusive results will only emerge once the Europa Clipper spacecraft arrives and transmits vital information from its advanced instruments.
Scientists have long been fascinated by Europa, one of Jupiter’s moons. Dr. Nozair Khawaja told Deutsche Welle that Europa Clipper, which launched in October 2024, is the largest spacecraft NASA has ever built for a planetary mission. It is equipped with nine advanced scientific instruments.
Dr. Khawaja explains that Europa is subjected to powerful gravitational forces from Jupiter as it orbits the planet. This gravitational interaction causes Europa’s icy outer shell and its mantle to flex and contract, heating the moon’s interior. This heat is essential because it helps keep the ocean beneath the ice sheet in a liquid state.
According to Dr. Nozair Khawaja, Europa’s interior is likely composed of silicate rock, while its crust is made up of saltwater and ice. Just as microscopic life on Earth originated from chemical interactions between seawater and rock, similar interactions may be occurring in Europa’s icy sea.
What scientific investigations will the Europa Clipper spacecraft carry out?
Dr Nozair Khawaja explained that the Europa Clipper spacecraft carries nine instruments onboard. Some are designed for remote sensing and will help determine the thickness of the ice layer covering the ocean and the salinity of its water. Europa’s salty ocean generates a secondary magnetic field that affects both the direction and strength of Jupiter’s magnetic field, and Europa Clipper is equipped with instruments to monitor these changes.
Dr Khawaja emphasized that the design of the Europa Clipper mission is particularly intriguing. The radiation environment around Jupiter is quite harsh, making it challenging to collect data, so the mission was developed to conduct 50 flybys near Europa without entering its orbit.
According to Dr Khawaja, many meteorites falling toward Earth are blocked by the planet’s upper atmosphere, but the situation is different on Europa. When a meteorite impacts its surface, material from Europa is propelled several kilometers above the surface in the form of micrometer-sized particles. Onboard instruments such as SUDA and mass spectrometers can collect these particles, and the data obtained will provide insights into the composition of Europa’s surface.
Dr Khawaja explains that this information will help determine whether these surface compositions are unique to Europa’s surface or connected to its icy ocean. Ultimately, only by analyzing this data will we be able to draw definitive conclusions about Europa’s habitability.
Note: The article was originally published in DW Urdu and re-published with permission of the publication and author.
In today’s world, where every industry is striving for eco-friendly, sustainable alternatives, the cosmetics sector is no exception. One of the latest trends gaining momentum in skincare is the use of enzymes, natural biocatalysts that offer a gentler approach to beauty routines. Today, several enzymes are used in skincare products, offering benefits such as deep cleansing, anti-aging, exfoliation, moisturizing, and antioxidant effects.
Exfoliation, one of the most important steps in popular skincare routines (such as the Korean 10-step routine), involves removing dead skin cells to achieve a smoother, more vibrant complexion. It has traditionally relied on abrasive approaches, such as harsh chemicals (acidic formulas below pH 5) and mechanical methods (microbead facial scrubs, abrasive sponges, and crushed apricot kernels).
Enzymatic peels are becoming a favorite thanks to their mild yet efficient action. Because enzymes target only dead cells, they help maintain the skin’s microbiome, leading to healthier skin. Moreover, there is a growing trend of formulating enzyme-based products alongside other natural ingredients, thereby amplifying their skin-rejuvenating benefits [1].
Different enzymes are used for exfoliation because they effectively break down the proteins in dead skin cells. Papain (from papaya), bromelain (from pineapple), and ficain (from the fig tree) are highly specific in their action, targeting only unwanted cells and leaving healthy surrounding tissue untouched. This property makes them suitable for sensitive skin and for conditions like hyperpigmentation, acne, and rough scars. Beyond exfoliation, enzymes are finding broader applications.
The enzyme diacylglycerol acyltransferase-1 (DGAT1) increases the effectiveness of retinoic acid and improves the skin’s appearance. Retinoic acid, a derivative of vitamin A, promotes the skin’s normal process of generating new cells and shedding old ones, which reduces wrinkles and helps smooth the skin. By enhancing the effect of retinoic acid, DGAT1 helps skin appear younger and fresher, making it a beneficial addition to skincare products.
“In all things of nature there is something of the marvelous” – Aristotle
The enzymes lysyl hydroxylase and prolyl hydroxylase help produce collagen, the protein that maintains the skin’s strength and firmness. Collagen supports skin elasticity and firmness by acting as a scaffold. These two enzymes help build strong collagen by adding special chemical groups to the amino acids lysine and proline in the collagen structure, making the collagen fibers solid and preventing wrinkles and sagging skin. Both enzymes require vitamin C to function properly; when vitamin C levels are low, the body produces less collagen, weakening the skin and increasing the chance of wrinkles [2].
Harnessing the power of fruits, honey, and botanicals for a radiant complexion
That’s why many anti-aging products include vitamin C and collagen-boosting ingredients to keep skin active and fresh. Peroxidases, including horseradish peroxidase and lactoperoxidase, are another class of enzymes crucial to the cosmetics industry. Although they have no direct effect on the skin, they are employed to preserve the freshness of skincare products: bacteria cannot survive in a product where the peroxidases have consumed the oxygen.
With the oxygen removed, the active chemicals in skincare products are protected from degrading and losing their efficacy. As a result, products last longer without artificial preservatives, which many consumers want to avoid. Peroxidases are particularly helpful in natural skincare products, since they offer a natural means of preservation and ensure the product’s continued safety and efficacy.
Additionally, they help prevent the oxidation of sensitive ingredients like vitamins and antioxidants, extending shelf life and ensuring optimal performance. The use of these enzymes reflects the continuing trend of using natural, biologically active ingredients to improve product performance and to meet customer demand for safe, eco-friendly, and scientifically advanced skincare products [3].
The potential of these bioactive compounds is expected to expand as research into enzymatic function advances, possibly offering even more precise and potent skincare solutions.
“Nature gives you the face you have at twenty, it is up to you to merit the face you have at fifty” – Coco Chanel
REFERENCES:
Gonçalves S. Use of enzymes in cosmetics: proposed enzymatic peel procedure. Cos Active J. 2021;1:27-33. https://cosmethicallyactive.com/wp-content/uploads/2021/11/Use-of-enzymes-in-cosmetics-proposed-enzymatic-peel-procedure.pdf
Gorres, K. L., & Raines, R. T. (2010). Prolyl 4-hydroxylase. Critical Reviews in Biochemistry and Molecular Biology, 45(2), 106-124
Okereke JN, Udebuani AC, Ezeji EU, Obasi KO, Nnoli MC. Possible Health Implications Associated with Cosmetics: A Review. Sci J Public Health. 2015;3(5):58. https://doi.org/10.11648/j.sjph.s.2015030501.21
Jørgensen C. Cosmetics worldwide – same contents? A comparative study. Copenhagen, Denmark: The Danish Consumer Council THINK Chemicals; 2020.
Gonçalves S, Gaivão I. Natural Ingredients Common in the Trás-os-Montes Region (Portugal) for Use in the Cosmetic Industry: A Review about Chemical Composition and Antigenotoxic Properties. Molecules. 2021;26(17):5255. https://doi.org/10.3390/molecules26175255.
Izquierdo-Vega JA, Morales-González JA, Sánchez-Gutiérrez M, Betanzos-Cabrera G, Sosa-Delgado S, Sumaya-Martínez M, et al. Evidence of Some Natural Products with Antigenotoxic Effects. Part 1: Fruits and Polysaccharides. Nutrients. 2017;9(2):102. https://doi.org/10.3390/
Kede MPV, Sabatovich O. Dermatologia Estética. 2004. p. 771.
Kanitakis J. Anatomy, histology and immunohistochemistry of normal human skin. Eur J Dermatol. 2002;12(4):390–9; quiz 400–1.
Gensler, H., & Magdaleno, S. (2015). DGAT1, Retinoic Acid, and Skin Cell Regeneration. Cosmetic Dermatology Journal.
Note: Dr. Ruqyya Khalid (Assistant Professor, Department of Biochemistry, Kinnaird College for Women, Lahore) is the main author of this article. Fatima Farhan and Moman Mumtaz are co-authors.