
Global Energy Markets Power Up as LDES Emerges as the Backbone of the Renewable Revolution

Long-Duration Energy Storage (LDES) is rapidly moving out of the lab and into real-world deployments, capturing the attention of governments, tech giants, utilities, and energy markets worldwide. Once a niche concept, LDES is now being touted as the missing link in the transition to 100% clean energy.

What is LDES, and why does it matter now?

Long-Duration Energy Storage (LDES) encompasses technologies designed to store vast amounts of energy and release it over extended periods, often 8 to 12 hours or even several days, rather than the few hours typical of conventional batteries. LDES stores energy in various forms, including thermal, electrochemical, chemical, and mechanical.1 These systems tackle one of renewable energy’s greatest challenges, intermittency: the reality that solar panels don’t generate power at night and wind turbines don’t always spin.

Unlike short-duration battery systems, LDES can bridge energy gaps overnight, during cloudy weather, and even during seasonal lows in renewable generation, making it indispensable for highly renewable grids. 
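The "duration" distinction can be made concrete: a storage system's deliverable energy is simply its power rating multiplied by its discharge duration. A quick illustrative sketch, with hypothetical figures not drawn from any project mentioned here:

```python
def storage_energy_mwh(power_mw: float, duration_h: float) -> float:
    """Deliverable energy of a storage system: power rating times discharge duration."""
    return power_mw * duration_h

# Same 100 MW power rating, very different energy delivered:
four_hour_battery = storage_energy_mwh(100, 4)   # 400 MWh, a typical lithium-ion system
eight_hour_ldes = storage_energy_mwh(100, 8)     # 800 MWh, an LDES-class system
multi_day_ldes = storage_energy_mwh(100, 100)    # 10,000 MWh, seasonal-scale storage
print(four_hour_battery, eight_hour_ldes, multi_day_ldes)
```

This is why LDES is usually described by its duration rather than its power rating alone: at the same grid connection size, an 8-hour system holds twice the energy of a 4-hour battery.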

Cutting-Edge Innovations Hit the Spotlight

China’s clean-tech leader HiTHIUM recently unveiled three major LDES breakthroughs at its annual Eco-Day event, including the world’s first 8-hour-native storage solution and a high-capacity 8-hour LDES cell designed for grid and AI data center integration. These innovations aim not just to store energy for long periods but to deliver it reliably in real time, even to power-hungry digital infrastructure, not just in China but around the globe.2

Policy and Market Momentum

Governments and regulators are accelerating policy actions to support long-duration energy storage (LDES) as part of efforts to bolster grid reliability and integrate higher shares of renewable power. In South Australia, officials have opened the first competitive tender under the Firm Energy Reliability Mechanism (FERM),3 which targets 700 MW of long-duration storage capacity across staggered operational dates between 2028 and 2031.

The technology-neutral tender, which can include batteries and other dispatchable capacity able to deliver sustained output, forms a cornerstone of the state’s strategy to secure more reliable, affordable electricity and manage the variability of wind and solar generation. The selected projects will be offered 15-year contracts to help underwrite revenues and support project finance, with bid submissions closing in late 2025 and outcomes expected in early 2026.

Meanwhile, in the United Kingdom, energy regulators are developing incentives to attract large-scale private investment into LDES. Ofgem,4 in coordination with the UK Government’s Department for Energy Security and Net Zero, has introduced a cap-and-floor revenue support regime for qualifying LDES projects, a model that sets minimum guaranteed revenues while capping excessive returns to protect consumers.
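The cap-and-floor idea can be sketched as a simple settlement rule: revenue below the floor is topped up to the floor, revenue above the cap is clawed back, and anything in between passes through. A minimal illustration; the figures are invented for the example and are not Ofgem's actual parameters, which are set project by project:

```python
def cap_and_floor_settlement(revenue: float, floor: float, cap: float) -> float:
    """Retained annual revenue under a cap-and-floor regime.

    Below the floor the project is topped up to the floor; above the cap
    the surplus is returned to consumers; in between, revenue passes through.
    """
    return min(max(revenue, floor), cap)

# Hypothetical figures, e.g. in GBP millions per year:
print(cap_and_floor_settlement(revenue=30, floor=50, cap=120))   # topped up to 50
print(cap_and_floor_settlement(revenue=80, floor=50, cap=120))   # passes through as 80
print(cap_and_floor_settlement(revenue=150, floor=50, cap=120))  # capped at 120
```

The floor is what makes projects bankable: lenders can count on a minimum revenue stream, while the cap protects consumers from windfall returns.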

UK regulator Ofgem is considering a 10-hour minimum duration for long-duration energy storage. (Photo: Energy Shortage)

The first application window opened in April 2025, and a multi-criteria assessment framework for selecting projects is being finalized, with formal cap-and-floor awards expected by mid-2026. Early eligibility assessments have already shortlisted several large-scale schemes, some totalling gigawatts of potential storage capacity, for further evaluation ahead of final selection.

These moves reflect a broader global recognition of LDES as crucial infrastructure for energy systems transitioning away from fossil fuels. By underwriting long-lived assets and offering revenue certainty, South Australia’s FERM and the UK’s cap-and-floor mechanism aim to reduce investment risk and catalyse the deployment of storage capable of delivering power for eight hours or more, helping to smooth peak demand and balance intermittent wind and solar output. Such regulatory innovation is seen as essential to unlocking the next wave of clean energy projects and ensuring grid stability as renewable capacity rapidly expands.

Tech Giants Join the Race!

Technology giant Google has announced a landmark collaboration with Arizona’s Salt River Project (SRP) to advance non-lithium long-duration energy storage (LDES) technologies,5 a step that could help accelerate the deployment of next-generation grid storage solutions. Under the first-of-its-kind research partnership, Google has committed to funding a portion of the costs for LDES pilot projects deployed on SRP’s electric grid, while also analyzing operational performance data and helping shape research and testing plans for these emerging systems.

The initiative focuses on storage technologies that can deliver power for extended periods far beyond conventional lithium-ion battery durations, which might help utilities better integrate renewable generation and improve grid reliability.

The collaboration builds on SRP’s broader strategy to explore a range of energy storage options that support its sustainability and reliability goals, including its commitment to achieve net-zero carbon emissions by 2050, while Google pursues its ambition of operating its global data centers and offices on 24/7 carbon-free energy and reaching net-zero emissions across its operations and value chain.

Both partners have highlighted the importance of LDES in stabilizing stressed grids and enabling deeper renewable energy penetration, particularly as utilities seek solutions that can deliver energy for 10 hours or more. SRP has previously issued requests for proposals for non-lithium LDES demonstration projects, and this collaboration with Google could help bring multiple such projects closer to commercial reality.

Global Projects Demonstrate Real-World Impact

Across continents, long-duration energy storage is beginning to translate from concept into concrete investment. Hydrostor, a Canada-based energy storage developer, recently secured US$55 million in fresh funding to advance its 200-megawatt compressed-air LDES project in Australia, marking a significant step in scaling storage technologies that go beyond traditional batteries.

The funding follows other developments in long-duration storage: Form Energy began construction of its first iron-air battery facility in West Virginia in 2024, while Energy Vault commissioned several gravity storage systems globally through 2023.6

At the same time, momentum is building in Africa, where LDES is emerging as a practical solution to chronic power shortages. In Nigeria, an initial 8 MWh long-duration storage project is being deployed to strengthen grid reliability and cut reliance on costly, polluting diesel generators. Together, these developments highlight how LDES is no longer confined to pilot labs in advanced economies but is increasingly being tailored to meet diverse energy challenges, from stabilizing renewable-heavy grids in Australia to delivering dependable power in emerging markets.

LDES: A Growing Climate and Economic Imperative

Industry experts argue that LDES is far more than a technical solution for balancing power grids; it is emerging as a strategic, economic, and climate instrument with wide-ranging impact. By storing clean electricity for hours or even days, LDES enables deeper penetration of renewables, reduces dependency on fossil-fuel peaker plants, and shields energy systems from price volatility and supply shocks.

At the same time, it opens new investment pathways, supports industrial growth, and helps countries meet climate targets while keeping power reliable and affordable for every sector, positioning LDES as a cornerstone of the global energy transition rather than just another piece of infrastructure.

What’s next?

With investment momentum building, new storage technologies nearing commercial maturity, and governments crafting clearer, long-term revenue frameworks, industry observers increasingly see 2026 as a pivotal moment for long-duration energy storage. If these trends converge as expected, LDES could finally bridge the gap between intermittent renewable generation and growing round-the-clock energy demand, turning wind and solar into dependable, always-available power.

More than a technical milestone, this shift would redefine how electricity systems are planned and operated, whilst providing the backbone for resilient, affordable, and decarbonized grids and signaling a decisive step away from fossil-fuel dependence toward a more secure energy future.

References:

  1. Long Duration Energy Storage Council (LDES Council)
  2. HiTHIUM Introduces Innovative Long-Duration Energy Storage Solutions at 2025 Eco-Day – Third News
  3. AER Submission – Firm Energy Reliability Mechanism, 20 December 2024
  4. Long-duration electricity storage – Ofgem
  5. We are shaping the future of long-duration energy storage technologies through a new partnership in Arizona.
  6. Hydrostor nets $200M for its long-duration energy storage ambitions – Energy Storage


The Role of Finance in the Clean Energy Shift: An Interview with Muhammad Ali Qamar

The relationship between renewable energy systems and financial markets has become more critical than ever as the global energy landscape undergoes a profound transformation. Today, investment decisions are no longer driven solely by technological feasibility but by robust economic modelling, data transparency, and policy certainty. For countries navigating energy transitions at different stages of development, understanding how markets price risk, opportunity, and sustainability is central to achieving long-term energy security and climate resilience.

In this edition of Scientia, we explore these dynamics with Muhammad Ali Qamar, a doctoral researcher whose work sits at the intersection of energy systems engineering, techno-economic analysis, and data-driven energy policy. With over eight years of experience in assessing the financial and technical viability of clean energy technologies, he brings a nuanced perspective on how analytical tools, market design, and policy frameworks shape investment flows into renewable energy. His research spans bioenergy systems, energy market analysis, and resource assessment, offering valuable insights into how data can reduce investment risk and inform strategic decision-making.

Muhammad Ali Qamar, a doctoral researcher at the University of California (Picture: M. Ali Qamar)

This conversation explores how financial markets influence the pace of renewable energy adoption, the role of techno-economic analysis in bridging the gap between innovation and investment, and the contrasting challenges faced by the Global North and Global South in financing clean energy transitions.

Hifz: Could you share your journey into the energy sector and discuss how your academic and professional experience have shaped your understanding of the role that renewable energy plays globally?

Ali Qamar: For me, it began with the profound realization that energy underlies everything in our lives, from biochemical energy to the many other forms that interact to make the world around us. During my undergraduate engineering degree, my university introduced a new program in energy systems. That sparked my curiosity, and I started taking elective courses in energy systems, with a growing interest in clean energy. In many ways, renewable energy has grown alongside my generation.

Over the first two decades, it moved from limited adoption to widespread use. I was very enthusiastic about this trajectory and wanted to pursue it further through a master’s degree in energy systems. Since then, my academic and professional journey has focused on understanding the challenges of clean energy adoption and finding practical ways to address them. I have worked in two very different energy markets, Pakistan and the U.S., yet I see the challenges of clean energy market adoption as more similar than different.

Hifz: How do you see renewable energy sources reshaping traditional energy markets, and what are the key techno-economic challenges for valuing and integrating intermittent generation, like solar and wind, into those markets?

Ali Qamar: Renewables like wind and solar have demonstrated their massive potential to disrupt both centralized and decentralized power systems. With increasingly favorable techno-economics enabling market adoption worldwide, there is a growing need to address the challenges associated with them. Their intermittency poses profound challenges, creating steep ramping requirements in electricity profiles like California’s. California has excess renewable electricity during the day that it needs to curtail. This curtailment often takes the form of the system operator paying neighboring states to take the excess solar energy so that grid stability is maintained.

As the sun goes down, electricity demand goes up, leaving the grid counting on natural gas plants. Energy storage is the most straightforward engineering solution to this problem, but utility-scale batteries are not yet economical at the scale required. Other innovative solutions, like behind-the-meter storage, vehicle-to-grid, green hydrogen production, and pumped hydroelectricity, have their own techno-economic challenges to overcome before becoming viable. 

I believe that transportation electrification, if adopted globally, has the potential to significantly reduce battery costs. This can enable a market-driven uptake of behind-the-meter batteries like solar energy systems. The utilities, grids, aggregators, and policymakers will have to work together to then integrate this resource for grid stabilization.

Renewable energy has the potential to disrupt both centralized and decentralized power systems. (Picture: Zbynek Burival)

Hifz: What role do financial instruments such as green bonds, power purchase agreements, or carbon pricing play in accelerating renewable energy deployment, and how important is access to capital and favorable financing structures for scaling projects, especially in emerging markets?

Ali Qamar: Financial tools like green bonds, power purchase agreements, and carbon pricing act as both incentives and pressure points for policymakers to steer markets in certain directions. Their effectiveness, however, varies widely depending on the market. But the effectiveness of bottom-up carrots and sticks is undeniable. While top-down policies can work in some contexts, bottom-up forces are often more powerful. 

For example, Pakistan’s recent solar boom was driven largely by unreliable grid electricity and attractive pricing for solar solutions. This happened without mandates, climate pledges, tax credits, or subsidies. I do not dismiss the use of these instruments, but disagree with a one-size-fits-all implementation that we often see in the emerging markets. Their use must be grounded in the needs of the markets and should be highly sensitive to the interests of all stakeholders. That requires rigorous data collection, careful analysis, and meaningful engagement with market participants to ensure that the policies actually align with real-world conditions. 

Hifz: Given your strong background in data analysis, how can advanced analytics and predictive models improve investment decisions in renewable energy, and what are the main data challenges investors face when evaluating these projects?

Ali Qamar: I touched on this in my previous response, but data analysis plays a critical role in effective investment and policymaking. The biggest data challenges are both quantitative and qualitative. In emerging markets like Pakistan, these challenges are particularly pronounced due to the limited availability of high-quality, reliable government data on energy systems. This has been improving with the institutional reforms in the last decade, so I expect the future analyses to be more robust. 

For investors, the reliability and predictability of the system’s data are key. For example, if I were to invest in a company, I would always prioritize investing in a company that regularly publishes its quarterly financials on time rather than a company that randomly misses a quarter or two or has missing data in its regular reports. Therefore, it is important for the energy systems and the market to have a reliable availability of quality data to enable investors to fully model their investments before committing. This was one of the reasons why the government had to offer guarantees when Pakistan sought independent power producers in the mid-2010s, which has borne bitter fruit to this day. 

Hifz: How do policy frameworks, including subsidies, feed-in tariffs, or carbon pricing, shape financial flows to renewables, and what regulatory priorities would you recommend to balance investor confidence with energy market stability?

Ali Qamar: The mechanisms vary from market to market, and their effectiveness varies accordingly. In Pakistan, the issue is not the mechanism or subsidies; it is the inconsistency. Historically, the existence of incentives or subsidies today has not proven to be a guarantee for tomorrow. This has majorly eroded investor confidence. I would emphasize the need for predictability and reliability of the market as these are the necessary foundations for an investor-friendly environment. Without these, all these tools (no matter how good they are) are like fancy buildings on loose soil. 

The government should focus on improving predictability over high returns. Once the market has established its reliability in the eyes of investors, the immense potential will naturally attract them to the avenues that offer high returns within the market. If this is too difficult for the government alone, opening the sector to greater competition and letting market forces play a stronger role may be a more sustainable path forward.

Hifz: With the rise of ESG (Environmental, Social, and Governance) investing and technological advancements, what major trends do you see shaping renewable energy finance and financial markets over the next decade? 

Ali Qamar: The significance of ESG considerations is growing worldwide, and emerging markets are no longer insulated from this shift. For example, buyers in Western markets are increasingly demanding cleaner production practices from Pakistan’s textile industry. Given the competitive nature of global textiles, the industry has little choice but to adapt. This creates strong demand for verifiable green credentials, driven by both investors and banks. It also presents a major opportunity for the renewable energy sector.

Through energy efficiency measures and rooftop solar, export-oriented industries can meet these requirements without waiting for the national grid to become cleaner (which might take ages, given the economic challenges of Pakistan).

Hifz: How can developing economies attract more institutional and international capital for renewable energy deployment while ensuring local economic benefits and long-term market sustainability?

Ali Qamar: This is literally the billion-dollar question that governments across the developing world are trying to answer. It’s also crucial to understand what NOT to do. Offering guaranteed dollar returns to foreign investors places a heavy burden on already strained public finances and is not a sustainable solution. Regulators should focus less on eliminating risk from investor profits and more on reducing risk within projects themselves. This includes simplifying regulations, improving market predictability, and regularly publishing reliable market data. 

These steps allow investors to assess risk independently and execute projects efficiently, without relying on government guarantees. To ensure local benefits and market sustainability, the focus should be on encouraging investors to capitalize on the local services ecosystem as much as possible. There are various policy tools that the regulators can use to enable this.   

Hifz: Given Pakistan’s rapid growth in solar capacity alongside persistent structural and financial challenges in the power sector, is it possible for Pakistan’s energy sector to achieve a more sustainable and economically viable energy mix? 

Ali Qamar: It is certainly possible, but a lot of challenges stand in the way. One major issue is that the utilities’ best-paying customers have moved to rooftop solar in response to high electricity prices. Instead of viewing residential solar as a threat, regulators should treat it as an asset. This can be done by modernizing the grid and introducing aggregators, virtual power plants, load management programs, and incentives for energy storage. 

Further development of wind energy (both onshore and offshore) should also be explored, as wind complements solar well in certain regions. As storage costs decline, another market-driven expansion could help improve grid reliability and maximize the use of solar power. Until then, load management programs can play an important role in making better use of the clean electricity already available.


The AI Nuclear Renaissance: How Diablo Canyon Became the Backbone of California’s Power Boom

I have lived in San Luis Obispo for over a decade, about 40 km from Diablo Canyon, California’s last operational nuclear power plant. I also work in machine learning and artificial intelligence.  This combination makes Diablo Canyon feel less like an abstract policy debate and more like a high-stakes reality unfolding in my own backyard.

Under a 2016 agreement, the Diablo nuclear plant was scheduled to shut down completely by 2025. It was deemed too expensive and too harmful to marine life. Then, California did one of the most significant policy U-turns of recent times. Diablo went from a facility slated for closure to one considered essential for a reliable grid in the AI era.

The Diablo nuclear plant is now expected to continue operations for another 20 years, with the U.S. Nuclear Regulatory Commission (NRC) set to finalize its license renewal in April 2026.

Diablo Canyon: capacity, concern, and the coastline

The Diablo Canyon nuclear power plant became operational in 1985 and is operated by Pacific Gas and Electric (PG&E). Diablo is an approximately 2.2-gigawatt (GW) plant, producing 18,000 gigawatt-hours (GWh) of electricity per year, enough to power 3 million California homes. Nuclear power provides approximately 9-10% of California’s annual electricity generation.
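Those two figures imply a very high utilisation rate, which is characteristic of nuclear baseload. A quick back-of-the-envelope check (assuming 8,760 hours in a year):

```python
# Diablo Canyon figures from the article: 2.2 GW capacity, ~18,000 GWh/year output
capacity_gw = 2.2
annual_output_gwh = 18_000
hours_per_year = 8_760

# Maximum possible output if the plant ran flat-out all year (~19,272 GWh)
max_possible_gwh = capacity_gw * hours_per_year
capacity_factor = annual_output_gwh / max_possible_gwh
print(f"Capacity factor: {capacity_factor:.0%}")  # roughly 93%
```

A capacity factor above 90% is typical of nuclear plants and far above what solar or wind can achieve, which is precisely why data center operators value this kind of steady output.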

Californians have debated the risks of Diablo for decades. These involve concerns about accidents, environmental impact, and California’s geological realities. California is highly susceptible to earthquakes, with more than 500 active fault lines, and multiple faults surround the Diablo plant. Given that the plant has been in operation for 40 years, there are also concerns about aging infrastructure and the steel reactor walls becoming brittle from radiation exposure (neutron embrittlement).

The biggest reason for the shutdown agreement, however, was environmental. The plant uses a “Once-Through Cooling” (OTC) system, pulling in 2-2.5 billion gallons of Pacific seawater per day to cool its reactors and killing an estimated 1.5 billion larval fish and marine organisms a year. In response, California’s State Water Board passed the OTC Policy, requiring power plants to either switch to closed-cycle cooling towers or shut down. PG&E decided it was cheaper to close the plant in 2025 than to build the towers.

The policy priorities of a decade ago, however, have been reshaped by new technical realities. Extreme heat, a strained grid, and above all, the explosive growth of AI changed Diablo Canyon from an environmental liability into strategic infrastructure almost overnight.

The AI power shock: understanding nuclear energy

An obvious question comes to mind: if data centers need energy, why not just plug into the existing electric grid? This is not feasible for three main reasons: time, capacity, and reliability.

Time: If a tech company wants to connect a new 1GW data center into the existing grid, it needs to join an interconnection queue, where the wait times are anywhere from 4 to 12 years. Companies can bypass the queue by connecting directly to a nuclear power plant.

Capacity: The US grid was designed for an era of flat electricity demand, where efficiency gains largely offset new usage. Things changed drastically with the rise of AI data centers, which require massive, multi-GW loads that run 24 hours a day. PG&E estimates California data centers may need another 10GW by 2035. In California, without Diablo Canyon’s 2.2 GW of steady output, supporting the growing AI clusters in Silicon Valley and the Central Valley would be far more difficult.

Reliability: The electric grid is vulnerable to extreme weather conditions, congestion, and blackouts. It is also a shared resource. A data center, however, requires a steady, reliable source of power 24/7. Nuclear plants can provide such dependable baseload power by having the data centers connect directly to the plant’s output. As of late 2025, the Federal Energy Regulatory Commission (FERC) has started creating new fast-track rules to allow data centers to co-locate with on-site nuclear power. 

Another important question: Is nuclear energy “green”? Short answer: Yes and No!

Nuclear energy is “carbon-free”, yes, as a nuclear power plant emits zero CO2 during operation. However, it is not renewable, as unlike sun or wind, uranium is a finite resource whose mining is not cheap, with dangers of radioactive contamination.

Nuclear energy does cause pollution, as power plants create radioactive waste. In the United States, there is still no permanent place to put this waste, so it sits in dry casks (giant concrete silos) on the coast at places like Diablo Canyon. Also, plants that use the OTC technique to cool their reactors, like Diablo, cause thermal pollution by pulling in billions of gallons of cold ocean water and pumping it back out much warmer. This kills kelp, drives away native fish, and destroys billions of tiny fish larvae, eggs, and plankton, effectively sterilizing a portion of the coastal waters every day.

It is important to point out that nuclear energy generally costs more to produce than typical wholesale rates. The cost is primarily shouldered by the tech companies that are willing to pay a premium today for guaranteed future capacity for their data centers. The federal and state programs also offer direct loans and tax credits to minimize the retail price. The impact on regular consumers, however, remains the most debated topic in energy economics right now.

The Diablo decade of change (2016-2026)

In 2016, PG&E reached a landmark agreement with environmental groups and labor unions to shut the plant down entirely by 2025 for reasons already discussed above.

In 2022-2023, extreme weather events kicked in with record-breaking heatwaves and the threat of rolling blackouts. The energy crisis forced California officials to reconsider losing Diablo, its largest steady power source. 

The great reversal started at the end of 2022, when a new bill was signed, which authorized keeping the plant open until at least 2030 while options were considered. It also provided state loan funds for continued operations.

In 2023, the NRC further permitted a rare extension of operations past the original license expiration without requiring the immediate installation of cooling towers or a new environmental review of the marine impact.

The 2024 AI boom further changed the math for Diablo Canyon. AI data centers require massive amounts of dependable 24/7 power, which the current grid cannot provide. 

In 2025, California regulators acknowledged that the state’s plan to add multi-GW of AI data center load is impossible without keeping Diablo Canyon online for much longer. In late 2025, the NRC began processing a request for a full 20-year extension, which would keep the plant running until 2045.

The shift toward keeping Diablo Canyon open was not rhetorical. It showed up in formal decisions across nearly every major regulatory agency. Positions that once treated the plant as obsolete and regulatory frameworks originally designed to protect the coastline have been recalibrated or deferred by the state government in the last three years to accommodate the needs of the AI economy.


Managing the risks!

None of the original concerns about Diablo Canyon vanished.

The plant sits near fault lines in an earthquake-prone state. Its OTC system still affects marine ecosystems. Nuclear waste still lacks a permanent disposal site in the United States. Here is how some of these risks are addressed to secure the 20-year extension:

Earthquakes: The NRC’s June 2025 Safety Evaluation found the plant safe, noting it was upgraded to withstand a 7.5 magnitude quake. PG&E also conducts continuous real-time seismic monitoring.

Thermal Pollution: To offset the larval fish deaths, the California Coastal Commission forced PG&E in December 2025 to permanently conserve 4,500 acres of coastline, preventing any future development. 

Embrittlement: The plant is undergoing a massive License Renewal Application audit by the NRC, which includes ultrasonic testing of the reactor vessels by PG&E to prove the steel is still ductile and safe.

What risks we are willing to accept today for the needs of the AI economy tomorrow is an ongoing debate.


Diablo and AI: a symbiotic relationship

Diablo today is part of an unusual loop. 

In December 2025, Diablo Canyon made history as the first U.S. nuclear plant to deploy an on-site generative AI solution by the startup Atomic Canyon. The startup spent nearly a year training its AI models to understand specific nuclear industry terminology. The resulting AI tool, called Neutron Enterprise, helps engineers navigate and search through millions of pages of technical manuals, 40 years of maintenance logs, and NRC regulations in seconds to speed up maintenance and safety audits. 

So, while the plant is set to power the world’s AI, the AI is helping keep this 40-year-old plant safe and running!

It is a strange modern symbiosis that creates a situation full of complex trade-offs but no easy answers.

As a resident, I understand the unease of living near an aging nuclear plant on an earthquake-prone coast. I deeply love the California coastline, and I see the real environmental cost where the plant’s cooling system harms its fragile marine life. And as an AI engineer, I see an energy system under immense strain to deliver the stable power that the AI age demands.

What I do know is that Diablo Canyon is no longer a relic of the past. It has become a high-stakes experiment for our digital future, balancing trade-offs, humming day and night just up the coast.

Disclosure: This post utilized Gemini 3 (Feb 2026) for data synthesis and research support. The final content reflects the author’s own verification and editorial judgment.

References:

  • The County Office of Emergency Services (OES), Nuclear Incidents 2026 Emergency Planning – link (accessed Feb 5, 2026)
  • Biden-Harris Administration Finalizes Award of $1.1 Billion in Credits to Pacific Gas and Electric’s Diablo Canyon Power Plant, Dept of Energy Grid Deployment Office, January 17, 2024 – link (accessed Feb 5, 2026)
  • International Energy Agency (IEA), Electricity Mid-Year Update 2025 – link (accessed Feb 5, 2026)
  • Groups appeal NRC’s decision to keep Diablo Canyon open: ‘Simply unacceptable’, The Tribune, July 12, 2023 – link (accessed Feb 6, 2026)
  • Nuclear Regulatory Commission (NRC), Safety Evaluation Related to the License Renewal of Diablo Canyon Nuclear Power Plant, Units 1 and 2, issued June 5, 2025 – link (accessed Feb 6, 2026)
  • California Coastal Commission (CCC), December 2025 Hearing Summary – link (accessed Feb 6, 2026)
  • California Energy Commission (CEC), 2025 Operations Assessment Report – link (accessed Feb 8, 2026)
  • State Water Board, Permitting Actions for Diablo Canyon Nuclear Power Plant Information Sheet, October 21, 2025 – link (accessed Feb 8, 2026)
  • PG&E Presentation and Complete Earnings Exhibits Q2 2025 – link (accessed Feb 8, 2026)
  • PG&E Launches First Commercial Deployment of On-Site Generative AI Solution for the Nuclear Energy Sector at Diablo Canyon, Nov 13, 2024 – link (accessed Feb 6, 2026)


AI and Machines: There Is No Such Thing as Conscious Artificial Intelligence!


“There is no such thing as conscious artificial intelligence!” So stated a study published in October 2025 in Humanities and Social Sciences Communications, a Nature Portfolio journal. The authors, Andrzej Porebski and Jakub Figura, further argued that associating consciousness with the computer algorithms used today (primarily large language models, LLMs), or with those likely to be invented in the foreseeable future, is deeply flawed. We believe these flawed associations arise from a lack of technical knowledge about how several new technologies, especially LLMs, actually work; that gap can create the illusion of consciousness.

The modern world credits Artificial Intelligence (AI) with transforming lives, owing to its wide range of applications and benefits. AI refers to the design of machines that perform tasks normally requiring human intelligence. Whether AI can truly think like humans has long been debated in philosophy, cognitive science, and computer science, and the discussion has intensified as AI systems demonstrate capacities for reasoning, problem-solving, and creativity. The central questions are: can machines attain human-like consciousness, or is their intelligence fundamentally different from ours?

“Machine intelligence is the last invention that humanity will ever need to make.” – Nick Bostrom. (Credit: Possessed Photography/Unsplash)

AI encompasses a broad range of technologies, including machine learning, natural language processing, robotics, and cognitive computing. It is commonly divided into two main categories: Narrow AI (Weak AI) and General AI (Strong AI). Narrow AI handles specific tasks only, such as image recognition, translation, or playing chess, and has no general intelligence or self-awareness. Strong AI, by contrast, would carry out any intellectual task a human can, reasoning, learning, and adapting across domains; it is often linked to self-awareness and consciousness. All current AI systems fall under Narrow AI, excelling at specialized tasks but lacking true understanding.

Over the past decade, AI has advanced remarkably, rekindling interest in time-honoured questions, among them whether AI systems can be conscious. Consciousness is one of the most complex and debated topics in neuroscience and philosophy. It generally refers to the subjective experience of awareness, thoughts, emotions, and perceptions, and it has several key attributes: self-awareness, intentionality, subjectivity, and qualia. A principal theoretical question is whether consciousness is solely a consequence of brain processes or requires something beyond computation.

There is a need to discuss artificial consciousness and why it is missing from current robots and AI systems. Consciousness is a crucial, yet often overlooked, topic in modern debates on AI and robotics ethics. Machines are constantly being integrated into our lives with little discussion of whether they are, or could become, conscious. As machines grow more social and lifelike, the need to reflect critically on the role consciousness plays in moral and legal considerations grows with them.

Phenomenal consciousness concerns subjective experience, while access consciousness concerns information available for reasoning and behaviour; the latter receives most of the attention in discussions of artificial intelligence. Drew McDermott presented a computational theory of consciousness in which a machine becomes conscious if it is modeled as experiencing things.

Several arguments bear on whether consciousness could develop in AI systems. The human brain has far more complex architecture, biochemical diversity, and developmental trajectory than current AI systems, and human consciousness emerged through complex, multi-level development. AI exhibits access consciousness, the capacity to broadcast and utilize information, but lacks phenomenal consciousness, which rests on subjective experience.

AI exhibits access consciousness, the capacity to broadcast and utilize information, but lacks phenomenal consciousness. (Credit: Igor Omilaev/Unsplash)

AI systems lack the elements of prime importance for human-like consciousness, such as emotions, embodiment, cultural development, and internal motivation. Spontaneous neural activity, protein-based biochemistry and pharmacological modulation, evolutionary and developmental plasticity, embodiment, emotions, and evaluative capability are all brain features that AI lacks and that are difficult to incorporate.

Partial or alternative forms of consciousness, not necessarily replicas of human consciousness, might be attainable. These need not be judged inferior or superior, merely qualitatively different. AI researchers should specify the type and level of consciousness they aim to develop, grounding the work in empirical biology rather than abstract theory alone.

People tend to develop emotional bonds with social robots, and such interactions raise questions about whether robots deserve rights or moral status, as when the robot Sophia was granted citizenship in Saudi Arabia. Since modern robots still lack consciousness and sentience, granting them human-like rights isn’t justified.

The robot Sophia was granted citizenship in Riyadh, Saudi Arabia. (Credits: Arab News/IIF)

Consciousness in machines doesn’t mean mimicking human behavior; it is a false assumption that only human-like traits indicate consciousness. According to neuroscience, the brain is necessary but not sufficient for consciousness, as some people function with very little brain matter. Certain criteria follow for machine consciousness: treat consciousness as real, acknowledge that other beings (human or non-human) may have it too, accept that it can arise from physical matter, build machines that support the processes consciousness needs, and ensure that the consciousness is observable.

According to some researchers, it isn’t necessary to pursue human-like consciousness at all, since AI is meant to enhance human life, not replicate human minds. But suppose for a moment that AI systems gained consciousness equal to that of humans: genuine concerns would then arise, such as rights and legal recognition, moral responsibility in case of a crime, and the question of human identity if machines begin to think and feel like us.

Whereas traditional AI models operate on predefined rules, modern AI relies on neural networks inspired by the human brain. Yet even the most advanced neural networks lack true understanding: they can recognize patterns, but they cannot comprehend meaning as humans do.

Philosopher John Searle’s “Chinese Room” thought experiment challenges the idea that AI can genuinely comprehend language. He imagined a person locked in a room, receiving Chinese characters and responding using a rule book, without understanding Chinese. AI, like the person in the room, only manipulates symbols; it doesn’t understand them. AI may simulate emotions, analyze patterns, and make predictions, but it does not feel emotions as humans do, and it is daunting to develop systems with gut feelings shaped by life experiences.

Consciousness can be studied scientifically through empirical neuroscience. For that purpose, a rubric of indicator properties was proposed, derived from the main theories of consciousness: Recurrent Processing Theory (RPT), which holds that consciousness requires feedback loops in sensory-perceptual systems; Global Workspace Theory (GWT), which takes consciousness to involve broadcasting information to various cognitive systems, such as attention, memory, and reasoning; Higher-Order Theories (HOT), which require recognition of one’s own mental states; Attention Schema Theory (AST), which views consciousness as a model of attention used for self-control; and Predictive Processing, which holds that consciousness rests on prediction and error correction in perception.

Modern AI systems were evaluated against these features using a theory-heavy approach: theories of consciousness were used to derive testable markers, called “indicators,” and the design and function of AI systems were compared to those markers. The conclusion was that no current AI system is conscious, although it may become possible in the future.

There are certain specific computational and architectural characteristics that AI would require to meet the standards of consciousness, such as algorithmic recurrence, global information broadcast, metacognitive monitoring, predictive modeling of attention, and embodiment and agency. In current AI systems like GPT, some of these indicator properties are present, but not the whole set.

Although current AI lacks consciousness, some theorists have high hopes that future AI may develop self-awareness through advanced neural networks and self-learning algorithms. AI systems might become philosophical zombies, acting as if they are conscious but lacking genuine subjective experience. Machines may develop consciousness through increasing complexity, such as integration with biological neurons or brain-like structures. This approach may also help bridge the gap between computation and true cognition.

The rapid progress of AI cannot be ignored, but human-like consciousness remains a distant goal. The basic nature of self-awareness and consciousness is still poorly understood, making it challenging to replicate them in machines. Whether AI can perceive and think like humans is both a technological and a philosophical question, and it will continue to evolve as our understanding of AI and human cognition grows.

References
  1. Butlin, Patrick, et al. “Consciousness in artificial intelligence: insights from the science of consciousness.” arXiv preprint arXiv:2308.08708 (2023).
  2. Anwar, Nur Aizaan, and Cosmin Badea. “Can a Machine be Conscious? Towards Universal Criteria for Machine Consciousness.” arXiv preprint arXiv:2404.15369 (2024).
  3. Farisco, Michele, Kathinka Evers, and Jean-Pierre Changeux. “Is artificial consciousness achievable? Lessons from the human brain.” Neural Networks 180 (2024): 106714.
  4. McDermott, Drew. “Artificial intelligence and consciousness.” The Cambridge Handbook of Consciousness (2007): 117-150.
  5. Hildt, Elisabeth. “Artificial intelligence: does consciousness matter?” Frontiers in Psychology 10 (2019): 1535.
  6. https://www.nature.com/articles/s41599-025-05868-8


Scientists Find a Natural SPF with UV-Protective Bacteria in Thailand’s Hot Springs


At Bo Khlueng hot spring in western Thailand, water temperatures approach 70 °C while sunlight reaches the surface with little attenuation. Heat and ultraviolet (UV) radiation are each stressful on their own; together, they create an environment hostile to most life. Yet some microorganisms survive here through biochemical adaptations evolved over long timescales. One of them, the thermophilic cyanobacterium Gloeocapsa sp. BRSZ, is attracting scientific interest because it produces a previously unidentified UV-absorbing compound.

The finding is noteworthy because it advances our understanding of microbial survival in harsh conditions. It also points toward UV filters that are more efficient, biodegradable, and less detrimental to marine ecosystems and people than many of the ingredients found in sunscreens today.

Coping with heat and radiation at once

Researchers from Meijo University in Japan and Chulalongkorn University in Thailand isolated Gloeocapsa BRSZ from the Bo Khlueng hot spring in Ratchaburi Province. The team was examining how thermophilic cyanobacteria respond to overlapping stresses, increasingly prevalent in a warming world, including high temperature, salinity, and intense solar radiation. Chemical analysis revealed that this strain produces a UV-absorbing molecule never before reported in cyanobacteria.

The newly identified molecule, named β-Glucose-bound Hydroxy Mycosporine-Sarcosine (GlcHMS326), belongs to the family of Mycosporine-like Amino Acids (MAAs). This molecule carries three chemical modifications: glycosylation, hydroxylation, and methylation, which make it stand out among other MAAs. These chemical features are more than structural curiosities. 

They influence how the molecule absorbs UV radiation, how stable it remains under prolonged sunlight, and how it functions in the cells. They suggest a protective system finely tuned by long-term exposure to both heat and UV stress.

How nature shields from UV

MAAs prevent damage to DNA, proteins, or cell membranes by absorbing damaging UV radiation and releasing the energy as harmless heat. They are found in various life forms, such as algae, corals, fungi, and bacteria. MAAs are typically photostable and do not readily break down into reactive or toxic by-products when exposed to sunlight.

Gloeocapsa BRSZ increases GlcHMS326 production in the presence of both UV-A and UV-B. Genetic analysis confirms a unique set of biosynthetic genes responsible for the chemical modifications. Similar gene clusters appear to be uncommon, suggesting that this biosynthetic pathway may be restricted to a narrow group of heat-adapted cyanobacteria.

Interestingly, although eight thermophilic cyanobacterial strains were isolated from the same hot spring, only Gloeocapsa BRSZ produced substantial amounts of the novel MAA. This highlights how closely related organisms can adopt very different biochemical strategies even when exposed to the same environmental pressures.

It is estimated that roughly 14,000 tons of sunscreen enter marine environments each year, much of it washed off swimmers in coastal waters. (Photo: Takeme tour)

Savior of Coral Reefs

Interest in naturally derived UV filters has grown significantly due to the increased concern about the environmental effects of conventional sunscreens. It is estimated that roughly 14,000 tons of sunscreen enter marine environments each year, much of it washed off swimmers in coastal waters. Several widely used chemical filters, including oxybenzone and octinoxate, have been linked to coral bleaching, disrupted larval development, and DNA damage at extremely low concentrations.

Rising ocean temperatures and acidification pose threats to coral reefs. As a result, certain states, including Hawaii, have banned the use of sunscreens with specific chemical UV filters. While these limitations minimize local exposure, they also highlight the importance of good UV protection that does not harm marine ecosystems. MAAs derived from marine algae and cyanobacteria have emerged as promising alternatives.

Emerging data suggest that GlcHMS326 could provide UV protection as a sunscreen ingredient while also showing antioxidant activity against UV-induced reactive oxygen species (ROS). The compound might therefore serve a dual purpose: shielding skin from damaging UV radiation while reducing oxidative damage.

Additionally, cyanobacteria are an increasingly effective production platform: they can be cultivated with three primary resources, light, water, and carbon dioxide. Earlier work suggests that MAA biosynthetic genes can be transferred among different microorganisms, pointing to a bio-based, scalable means of producing MAAs.

Questions remain about GlcHMS326. We have no data yet on how stable it remains in formulations such as sunscreens, or how well it holds up under prolonged UV exposure. Nor do we know whether producing it at scale will be economically feasible.

Organisms that can Adapt in Extreme Environments

While the discovery of GlcHMS326 does not directly yield new sunscreen ingredients, it opens avenues for understanding the mechanisms by which organisms adapt over time to survive even in extreme environments.

Bo Khlueng hot springs are only one example of extreme habitats where organisms survive; similar metabolic pathways exist in polar, desert, marine, and deep-ocean environments.

References:

  • Samsri, S., et al. (2025). Discovery of a novel natural sunscreen from thermophilic cyanobacteria with a potentially unique biosynthetic pathway and its transcriptional response to environmental stresses. Science of the Total Environment. https://doi.org/10.1016/j.scitotenv.2025.181006
  • Kageyama, H., & Waditee-Sirisattha, R. (2019). Antioxidative, anti-inflammatory, and anti-aging properties of mycosporine-like amino acids. Marine Drugs, 17(4), 222. https://doi.org/10.3390/md17040222
  • Downs, C. A., et al. (2020). Sunscreen use and awareness of chemical toxicity among beachgoers in Hawaii before a ban on certain ingredients. Marine Policy, 117, 103875. https://doi.org/10.1016/j.marpol.2020.103875

The Modern Alchemy at CERN: Turning Lead into Gold is Possible Now!


For centuries, alchemists in ancient China, India, and Europe dreamed of transforming base metals like lead into precious gold, a long-standing quest known as chrysopoeia. They believed a mysterious substance called the “philosopher’s stone” could unlock this secret. While their dreams never came true, modern science has finally achieved what they could only imagine, thanks to the incredible work of scientists at CERN’s Large Hadron Collider (LHC) in Switzerland.

A Medieval Dream Realized – Through Science

This isn’t magic; it is the realization of an ancient alchemist’s dream through modern nuclear physics. In the 20th century, we learned that heavy nuclei can transmute, either by radioactive decay or by particle bombardment in the lab. Recently, at CERN’s Large Hadron Collider (LHC), scientists of the ALICE (A Large Ion Collider Experiment) collaboration observed the transmutation of lead atoms into gold.

However, this transmutation did not come from direct collisions, but through a phenomenon involving near-miss interactions between lead nuclei moving at nearly the speed of light. These near-collisions generate extremely powerful electromagnetic fields that can knock three protons out of a lead atom. Since gold has three fewer protons than lead, this results in the formation of a gold atom, at least for a very short moment (Space.com, 2024).

The Science Behind the Magic

Let’s break it down. An atom of gold has 79 protons, while lead has 82. So, turning lead into gold is essentially a matter of removing three protons. But protons are tightly bound in the nucleus by something called the strong nuclear force, one of nature’s strongest forces. To overcome this force, scientists used the LHC (the world’s largest and highest-energy particle accelerator) to speed up lead nuclei to 99.999993% the speed of light. When these nuclei barely miss each other, rather than crashing head-on, they generate a huge electromagnetic pulse (The Conversation, 2024).
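As a back-of-the-envelope check (our own sketch, not a calculation from the article), the quoted speed of 99.999993% of the speed of light implies a Lorentz factor of roughly 2,700, meaning each nucleus carries about 2,700 times its rest-mass energy:

```python
import math

# Speed quoted for the LHC's lead nuclei, as a fraction of the speed of light.
beta = 0.99999993

# Lorentz factor: gamma = 1 / sqrt(1 - beta^2).
# It measures how many times a nucleus's total energy exceeds its rest energy.
gamma = 1.0 / math.sqrt(1.0 - beta**2)

print(f"gamma ≈ {gamma:.0f}")  # roughly 2700
```

At such energies, the electromagnetic field of a passing nucleus is compressed into an intense pulse, which is what makes the near-miss "photon hammer" effect possible.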

This pulse triggers what’s called “electromagnetic dissociation,” where the atomic nucleus shakes and ejects neutrons and protons. If exactly three protons are removed, the lead atom becomes gold. These interactions are incredibly rare and last for just microseconds, but they are real, measurable, and profoundly significant. (CERN News, 2024).

Before you get excited about getting rich, here’s the reality check: between 2015 and 2018, scientists at CERN produced approximately 86 billion gold nuclei. Sounds like a lot? It only adds up to about 29 picograms, or 29 trillionths of a gram (Journee Mondiale, 2025). That’s so tiny it wouldn’t even be visible, let alone useful for making jewelry.
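The reported mass is easy to sanity-check from the nucleus count alone. A quick sketch (using the standard molar mass of gold-197 and Avogadro's number, which are not figures from the article):

```python
AVOGADRO = 6.02214076e23   # nuclei per mole
MOLAR_MASS_AU = 196.97     # grams per mole of gold-197

nuclei = 86e9              # ~86 billion gold nuclei produced between 2015 and 2018
mass_g = nuclei / AVOGADRO * MOLAR_MASS_AU
mass_pg = mass_g * 1e12    # grams -> picograms

print(f"{mass_pg:.0f} picograms")  # ~28 pg, consistent with the reported ~29 pg
```

The same arithmetic explains why removing protons matters: lead has 82 and gold 79, so each successful transmutation sheds exactly 82 − 79 = 3 protons.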

The production rate was impressive, up to 89,000 gold nuclei per second during active experiments, but the atoms broke apart almost instantly after forming, colliding with the LHC’s beam pipe or other components and decaying into other particles (Space.com, 2024).

So why is this important if it doesn’t make us rich?

According to Dr. Elena Markov, a researcher on the ALICE experiment, this is about far more than gold. “It’s a beautiful demonstration of Einstein’s E = mc² in action, showing how energy and matter can be transformed” (Journee Mondiale, 2025). The findings help scientists understand nuclear stability and reactions, and even how elements form in cosmic events like neutron star collisions.

What’s more, the advanced detection technology used, particularly the zero-degree calorimeters (ZDC) that detect subtle nuclear changes, opens new research pathways potentially beneficial for nuclear medicine, particle physics, and future clean energy sources (CERN News, 2024).

Interestingly, this isn’t the first time humans have made gold from lead. In the 1970s, nuclear chemist and Nobel laureate Glenn Seaborg and his team at Lawrence Berkeley National Laboratory achieved lead-to-gold conversion using a powerful particle accelerator. While the result was groundbreaking at the time, the method was extremely expensive; a senator even criticized it for wasting taxpayer money.

Even earlier, in 1937, physicist and Nobel laureate Edwin McMillan created the first artificial isotopes of gold using early particle accelerators known as cyclotrons. Since then, nuclear transmutation has become routine in laboratories worldwide: nuclear scientists regularly create elements and isotopes previously unseen in nature, contributing significantly to our understanding of atomic structure and fundamental physics (Discover Magazine, 2024).

The ALICE detector. (Photo: CERN)

From Myth to Measurement

The success of this experiment at CERN beautifully shows how ancient curiosities still inspire modern science. Alchemists, despite their mistaken theories and mythical approaches, were right to ask fundamental questions about matter. Today, with powerful machines and brilliant minds, scientists have not only proven that transmutation is possible but have also expanded humanity’s understanding of nature at its most fundamental level.

Scientists emphasize that the true goal of modern nuclear physics is not the production of gold but rather achieving gold-standard knowledge. The tiny amounts of gold produced in the LHC experiments symbolize something far greater: the extraordinary power of science to transform our understanding of the universe itself.

These advancements in nuclear transmutation could influence numerous scientific fields. As nuclear physics progresses, understanding these elemental transformations might inspire innovative approaches in medicine, such as targeted radiotherapy utilizing gold nanoparticles, or even in developing new materials and clean energy technologies.

Moreover, understanding nuclear processes at a deeper level helps predict and manage challenges in future particle accelerators. Insights from these experiments inform scientists about beam stability, energy losses, and potential enhancements to collider performance, guiding future technological advancements for exploring the tiny building blocks of the universe.

The CERN discovery bridges ancient alchemical dreams with modern science. Medieval alchemists sought gold for wealth and immortality; today’s scientists are not after wealth but after understanding how the universe works. The tiny gold atoms created at CERN may be insignificant as treasure, but as scientific milestones, they’re invaluable.

The transformation of lead into gold at CERN is thus symbolic of a broader human quest: understanding the universe’s deepest secrets. The true wealth lies not in the tiny amounts of gold produced but in the immeasurable knowledge that emerges from pushing the boundaries of science.



Adélie Penguin Weird Behaviors from Climate Change


Clips of an Adélie penguin, often referred to as the “Nihilist Penguin,” are gaining significant traction online. Filmmaker Werner Herzog took to Instagram to share the context behind the scene, from his 2007 documentary “Encounters at the End of the World,” which has reemerged as the popular “lonely penguin” or “nihilist penguin” meme circulating widely across social media.

Herzog offered insights into how the scene was captured and why it continues to resonate with audiences. The viral trend centers on an image of a penguin wandering away from its colony towards Antarctica‘s desolate interior. Users have widely shared this clip across various platforms, pairing it with captions that explore themes like isolation, existential reflection, and detachment.

The original footage shows a single Adélie penguin breaking away from its group and, strangely, heading inland instead of remaining along the Antarctic coastline. Penguins are usually found in large colonies and travel in groups. Researchers living and traveling in Antarctica follow a strict rule of never disturbing them.

Surprisingly, this disoriented penguin appeared at New Harbor, around 80 kilometres from where it should have been, heading deep into the continent’s interior with nearly 5,000 kilometres ahead of it, a journey that would almost certainly end in its death.

Herzog had spoken to scientists who study penguins’ unusual behaviours, and he drew inspiration for the scene’s tone from the ominous crime television series “Unsolved Mysteries.”

The viral footage shows a rare behavior in which a penguin stops trying to survive.

Why do they behave weirdly? 

The viral footage shows a rare behavior in which a penguin stops trying to survive. Dr. David Ainley, a scientist featured in the documentary, explained that even if the penguin is brought back to the water, it will immediately turn around and walk back toward the mountains. Among Adélie penguins, this is known scientifically as “death march behavior.”

According to Ainley, the penguin was disoriented, suffering a kind of neural error. Penguins rely on sun and magnetic-field cues to navigate, so a “biological error,” a disruption of their internal compass, can disorient them and lead them to mistake the barren inland for the sea. Changes in sea ice, such as deep cracks, can likewise disorient penguins and force them into unnatural decisions.

Behavioral responses of Adélie Penguin

A study published in September 2022 (available via ScienceDirect) by researchers from the Korea Polar Research Institute analysed the unusual behaviours of Adélie penguins confronting a giant ice floe.

Climate change is contributing to more extreme events worldwide, and animals are facing rapid, extreme changes in their natural habitats. The Adélie penguin, a sea-ice-dependent diving seabird, has been an important study species for investigating the effects of ice conditions on ecological responses in Antarctica.

Penguins are categorized as krill-dependent species: animals that rely on Antarctic krill as a primary food source for survival, reproduction, and growth within the Southern Ocean ecosystem. Because they depend on this specific, concentrated food source, they are highly vulnerable to fluctuations in krill populations driven by climate change, sea-ice loss, or commercial fishing.

These species are considered key indicators in the CCAMLR Ecosystem Monitoring Program (CEMP) because significant habitat changes have appeared in response to changes in the sea-ice environment. The program was initiated in the 1980s to study long-term changes in the Antarctic marine ecosystem.

According to this study, recent extremes in sea-ice extent have affected the Adélie penguin population, which suffered total breeding failures in the Dumont d’Urville Sea twice, in 2014 and 2017.

Understanding how extreme sea-ice conditions alter the foraging strategies of seabirds is therefore crucial. Earlier studies show that in extensive sea ice, such as fast ice, or during extreme events like iceberg calving, Adélie penguins make long foraging trips and struggle to deliver food to their chicks. Sea ice thus bears a complex relationship to the reproductive performance of Adélie penguins.

The research suggests that the giant ice floe altered foraging paths: penguins bypassed or crossed the ice to reach their foraging areas, spending more energy and time.



Gul Plaza Fire: Key Takeaways for Building Fires and Safety


On the night of January 17, 2026, a devastating fire engulfed Gul Plaza shopping center in Karachi. The blaze spread with terrifying speed, ultimately claiming 63 lives and leaving a scar on the city. This tragedy, like many before it, underscores a critical reality: fires in buildings pose a severe threat that can escalate rapidly.

This incident is a tragic case study in rapid-fire spread. The fire, which started in a single shop, quickly consumed the multi-story complex. According to local media, much of the structure has collapsed, and what remains may have to be demolished due to severe structural damage.

Fires can start anywhere, but what structural and systemic failures on the night of January 17 allowed this one to spread on such a scale? That question is on everyone’s mind. According to a rescue worker, access to the site was a major challenge on the night of the fire: the road was narrow, and a large crowd gathered, blocking it entirely.

Another reason for the rapid spread and repeated flare-ups was the materials inside. The plaza was filled with shops selling clothing, plastics, and other highly flammable goods. This provided an abundant fuel source for the fire. The building lacked adequate fire-resistant barriers between shops and floors, allowing the fire, heat, and toxic smoke to travel unimpeded throughout the structure.

Critically, the building was missing essential safety features. Reports indicated there were no functioning smoke alarms, sprinkler systems, or fire hoses. Although extinguishers may have been present, the initial blaze was not controlled. According to the BBC, 13 of the building’s 16 exits were reportedly locked, trapping shoppers and staff inside.

Because of these compounding factors, firefighters faced significant challenges, including traffic congestion and difficulty accessing the building’s interior. They had to break through walls to create entry points, losing valuable time as the fire raged.

The Gul Plaza fire serves as a somber reminder of what can happen when preventive measures are overlooked and safety standards are compromised. This article aims to explain the common causes of building fires, the importance of prevention, and essential strategies for a safe escape. Understanding these elements is the first step toward better fire safety.

What Causes Fires in Buildings?

Buildings are complex environments where numerous hazards can coexist. A small spark can quickly become a deadly inferno due to a combination of factors.

Electrical Failures and Human Error

Electrical failures are a leading cause of fires. Faulty wiring, overloaded circuits, short circuits, and outdated electrical systems can generate intense heat, igniting nearby combustible material. Modern buildings are filled with items that burn quickly. Plastics, synthetic fabrics in furniture and clothing, and various chemicals can act as fuel, helping a fire spread rapidly.

Human error and carelessness are often major contributors as well. Unattended cooking, improper disposal of cigarettes, misuse of heating appliances, and children playing with matches can all lead to disaster.

Structural Weaknesses and Lack of Maintenance

The very materials used to construct a building can pose a risk. If building materials do not meet fire safety standards, they can fail quickly when exposed to heat, leading to structural collapse.

Neglect is a silent accomplice to fire. Poorly maintained heating systems, clogged vents, and uninspected electrical wiring create hazardous conditions. Crucially, a lack of maintenance on fire safety equipment like alarms and extinguishers renders them useless when needed most.

Fire (image generated by the author with AI)

Fire Safety Standards and Building Regulations

To combat these risks, governments and municipalities establish rules to protect occupants. These regulations are the foundation of building fire safety. The primary goal of these standards is to ensure people can escape safely, which involves multiple layers of protection, each representing a different type of building fire safety.

According to experts, every building must have a sufficient number of clearly marked emergency exits, which must never be locked or blocked. Buildings should also have active fire protection systems, designed to detect and control a fire: smoke detectors, heat alarms, automatic sprinkler systems, and fire extinguishers.

Passive fire protection is another essential layer, involving construction materials and designs that resist fire and limit its spread. Fire-resistant doors, walls, and compartmentalization of floors help contain a blaze, buying precious time for evacuation.

The Building Code of Pakistan (Fire Safety Provisions-2016) outlines many of these requirements. However, the effectiveness of any code depends entirely on its enforcement. In many parts of Pakistan, inadequate inspections and a lack of accountability mean that numerous buildings remain dangerously non-compliant, setting the stage for incidents like this one.

How to Manage Fire Hazards in Buildings

Incidents like the Gul Plaza fire are no longer rare in Pakistan’s major cities, yet preventive measures continue to be overlooked by provincial and federal governments. In contrast, proactive fire management is widely practiced around the world to reduce risk and loss, placing responsibility on both building owners and occupants.

Regular inspections of electrical systems, heating units, and fire safety equipment are non-negotiable. Building owners must ensure proper storage of flammable materials and maintain clear, unobstructed hallways and exits. Investing in building fire safety is an investment in human life. This includes installing fire-resistant doors and windows, upgrading to modern wiring, and fitting comprehensive sprinkler and alarm systems.

Training and drills are also crucial for large and crowded buildings; knowing what to do in a fire is just as important as having the right equipment. Regular fire drills help familiarize occupants with evacuation routes. Training on how to use a fire extinguisher and basic first aid can empower people to respond effectively in the critical first moments.

How to Escape a Fire!

In a fire, panic can be as dangerous as the flames; knowing the correct escape procedure can save your life. Staying calm helps control fear, and a clear mind makes better decisions. If you encounter a fire, alert others and activate the nearest fire alarm immediately.

Before opening any door, feel it with the back of your hand. If it is hot, do not open it; fire is likely on the other side. Try to find an alternate route. Smoke and toxic gases rise, so stay as low to the floor as possible, where the air is cleaner and cooler, and crawl to the nearest exit.

Never use an elevator during a fire. It can malfunction, lose power, and become a deadly trap. Always use the stairs. If you cannot escape, seal the room. Use tape, towels, or clothing to cover vents and cracks around the door to keep smoke out. Call emergency services and tell them your exact location. Signal for help from a window by waving a flashlight or a brightly colored cloth.

The Role of Rescue Services: What Can Be Improved?

Firefighters and rescue teams are the last line of defense, but they often face immense challenges. As seen in the Gul Plaza fire, limited resources, traffic, and structural instability can severely hamper their efforts. To improve effectiveness, municipalities must:

  • Invest in Resources: There is an urgent need for more fire stations, modern equipment (like high-rise ladders), and protective gear for personnel.
  • Enhance Training: Firefighting is a high-risk profession that requires continuous and rigorous training to handle modern building fires and complex rescue scenarios.
  • Improve Emergency Planning: Coordinated emergency response plans that account for traffic management and access to dense urban areas are essential for reducing response times.

What Needs to Change: Long-Term Solutions

Preventing future tragedies requires a fundamental shift in our approach to building fire safety. Strengthening and enforcing regulations is a crucial first step: fire safety codes, including the Building Code of Pakistan (Fire Safety Provisions-2016), must be enforced with zero tolerance for non-compliance. This requires regular, thorough inspections and meaningful penalties for violations.

Around the world, public education and rescue training for fire incidents are standard practice, teaching citizens about fire risks, prevention techniques, and evacuation procedures. This knowledge empowers individuals to protect themselves and their communities.

Authorities in high-risk, densely built areas like Karachi’s Saddar must conduct a comprehensive review of existing building safety standards and retroactively apply them to older structures where feasible.

The Gul Plaza fire is a painful lesson in the consequences of neglecting building fire safety. Fires are not just accidents; they are often the predictable outcome of human error, poor maintenance, and regulatory failure. While we cannot eliminate every risk, we have the power to significantly reduce the danger.

The responsibility is shared. Building owners must invest in safety, authorities must enforce the law, and every individual must learn how to prevent fires and what to do if one occurs. By taking fire safety seriously, we can work together to ensure that our homes, workplaces, and public spaces are safe for everyone.

References:

  1. https://www.abc.net.au/news/2026-01-18/pakistan-gul-shopping-centre-fire/106241742
  2. https://cottongds.com/news/what-are-the-top-causes-of-fires-in-commercial-properties
  3. https://www.hec.gov.pk/english/services/universities/Monitoring-Evaluation/Documents/Fire-Safety%20Provisions.pdf
  4. https://www.geo.tv/latest/645891-gul-plaza-inferno-doused-fatalities-rise-to-14-as-rescuers-expand-search-for-missing
  5. https://www.bbc.com/news/articles/c1ev4z4n5dzo
  6. Kodur, V., Kumar, P., & Rafi, M. M. (2020). Fire hazard in buildings: review, assessment and strategies for improving fire safety. PSU Research Review, 4(1), 1-23. doi: https://doi.org/10.1108/PRR-12-2018-0033
  7. https://aito.com.my/escape-fire/?srsltid=AfmBOopeon8PByW2c4jEGcyZlhsAW35oWGPyu-aRaIsnU9NkMTgcRUBx


Stopping Cancer Before It Starts: A Cellular and Preventive Perspective


“Sorry, it’s too late. It has spread.” Those words resonate far beyond a medical report; they carry the weight of grief, regret, and the quiet horror of time lost. Cancer is rarely sudden. It develops silently over the years, often without symptoms, until it reaches advanced stages. Every day a disease remains undiagnosed, every dismissed symptom, and every reassurance that delays testing all increase the likelihood that treatment may no longer be effective. You hold their hand, wishing for a miracle, but sometimes the opportunity for intervention has already passed. The hardest question is not why, but what if it had been caught earlier?

“Cancer begins long before it is seen, in cells that whisper warnings we often fail to hear.” The silent interval between cellular mutation and the onset of detectable disease represents the period during which prevention has the greatest impact. It is within this period that lifestyle choices, environmental awareness, and medical intervention can truly make the difference between life and loss [1,2].

Cancer as a Multistep Cellular Process

Cancer is fundamentally a disease of cells, arising from a complex, multistep process that unfolds over years. Healthy cells are continuously exposed to a variety of internal and external insults. Reactive oxygen species, generated during normal metabolism, can damage DNA, proteins, and lipids [3]. Errors in replication and the byproducts of chronic inflammation add to this burden, while environmental exposures such as ultraviolet radiation, chemical carcinogens, and viral infections compound cellular stress [4]. 

Under normal circumstances, the body’s DNA repair mechanisms correct most damage, and damaged cells are eliminated through apoptosis. The immune system also plays a crucial role in recognizing and destroying aberrant cells. When these safeguards fail, mutations accumulate in oncogenes, tumor suppressor genes, and DNA repair genes, tipping the balance toward uncontrolled cellular proliferation and malignancy. Genomic instability, epigenetic dysregulation, failure of apoptosis, and evasion of immune surveillance are central hallmarks of this progression [1,2].

“The earliest victories are invisible; they happen at the cellular level, long before symptoms arise.” Understanding these mechanisms allows researchers and clinicians to identify biomarkers and interventions that target the earliest, most reversible stages of carcinogenesis [1,2].

Oxidative Stress and Inflammation

Reactive oxygen species (ROS) play a dual role in biology. While essential in signaling and immune defense, chronic excess levels induce damage to DNA, proteins, and lipids [3]. Over time, this damage contributes to mutagenesis, chromosomal instability, and the initiation of malignant transformation. Oxidative stress arises from both internal and external sources. Internally, metabolic byproducts and chronic inflammation increase ROS production. 

Externally, pollutants, tobacco smoke, poor diet, and certain infections act as additional triggers. The body’s endogenous antioxidant systems, supplemented by nutrients such as vitamins C and E, carotenoids, and polyphenols from a plant-based diet, help neutralize ROS. However, when oxidative stress overwhelms these defenses, cellular injury accumulates [3,6].

Chronic inflammation amplifies this damage, promoting cell proliferation, angiogenesis, and tissue remodeling [3,4]. Persistent inflammatory signaling produces cytokines, growth factors, and enzymes that favor a tumor-supportive microenvironment. Conditions such as obesity, metabolic syndrome, chronic viral infections, autoimmune disorders, and prolonged psychological stress all contribute to a state of chronic inflammation [5,9].

Over time, this environment facilitates DNA damage, impairs apoptosis, and allows abnormal cells to evade immune surveillance. The interplay between oxidative stress and chronic inflammation forms a central axis in early carcinogenesis.

“The body speaks in small signals; ignoring them is a risk no one should take.”

Lifestyle as Cellular Defense

Lifestyle factors profoundly influence cancer risk. Diet plays a critical role, with a plant-rich, whole-food approach providing antioxidants, polyphenols, and micronutrients that neutralize ROS, modulate gene expression, and support DNA repair [6,7]. Fiber-rich foods promote gut health and reduce exposure to carcinogenic metabolites, while healthy fats such as omega-3 fatty acids mitigate systemic inflammation.

Conversely, diets high in processed meats, refined sugars, and saturated fats exacerbate oxidative stress, promote inflammation, and increase carcinogenic pathways. Every meal, every choice, therefore, has the potential to influence cellular resilience.

Physical activity complements these effects. Regular exercise improves insulin sensitivity, reduces excess adiposity, modulates hormone levels, enhances immune surveillance, and lowers systemic inflammation [8]. Even moderate-intensity activity, sustained over time, reduces the risk of colorectal, breast, and endometrial cancers. Consistency, rather than intensity, defines its protective effect.

In parallel, managing psychological stress is crucial. Chronic stress dysregulates the hypothalamic-pituitary-adrenal axis, alters cortisol and catecholamine levels, suppresses cytotoxic immune function, and promotes inflammatory signaling [5,9]. Practices such as mindfulness, meditation, cognitive therapy, and social support act as biological shields, reinforcing the body’s ability to repair and defend.

“Hope is not denial; it is listening to the early signs before silence becomes final.”

Environmental Risk Factors

Environmental exposures constitute a significant, often preventable, fraction of cancer risk. Tobacco smoke, both active and passive, remains the leading preventable carcinogen [4]. Excessive alcohol consumption contributes to oxidative stress and mutagenesis, while occupational or environmental exposure to chemicals such as benzene, asbestos, and pesticides compounds the risk [4]. Ionizing radiation, whether from medical imaging or environmental sources, adds potential for further DNA damage. Reducing these exposures through behavioral interventions, public health policies, and occupational safeguards is essential for primary prevention. Awareness and proactive avoidance of such risks can profoundly alter individual and population-level cancer outcomes.

Early Detection: The Life-Saving Window

While lifestyle and environmental measures prevent cancer initiation, early detection serves as a secondary prevention strategy. Screening methods, including mammography, Pap smears, HPV testing, colonoscopy, and genetic testing for high-risk populations, enable identification of pre-malignant or early malignant lesions [2,4]. Emerging technologies, such as liquid biopsies detecting circulating tumor DNA, promise to detect malignancy even before conventional imaging or symptoms appear. Timely detection dramatically improves prognosis, enabling intervention at stages when therapy is most effective and minimally invasive.

“I lost her to a late diagnosis. Let this be a warning: act early, detect early, save lives.” (Photo: author)

Public Health Implications

Individual prevention is amplified through population-level interventions. Tobacco taxation, vaccination programs for HPV and hepatitis B, nutrition education campaigns, and widespread access to screening and genetic counseling all contribute to reducing cancer incidence [2,4]. Policies that facilitate healthy behaviors and early detection complement personal efforts, creating a societal framework where cancer prevention is proactive rather than reactive. Integrating molecular understanding, lifestyle guidance, and public health strategies constitutes a comprehensive approach to cancer prevention.

Final Words!

Stopping cancer before it starts is not merely aspirational; it is biologically plausible and evidence-based. Cancer begins silently, through cumulative DNA damage, oxidative stress, chronic inflammation, and environmental insults. Interventions such as diet, exercise, stress management, avoidance of environmental carcinogens, and timely screening collectively reinforce cellular resilience. Telomere preservation and molecular biomarker monitoring provide additional layers of protection. Prevention is a cumulative, lifelong commitment, a deliberate choice to create an internal environment that resists malignant transformation.

“Every moment we act early is a moment gained, a life preserved, a loss prevented.” Through awareness, informed choices, and early intervention, cancer can be intercepted long before it reaches the point of no return.

“This is for my mother, whom I lost, and for the hope that no one else has to watch a loved one slip away because it was caught too late.”

References:

  1. Hanahan D, Weinberg RA. Hallmarks of cancer: The next generation. Cell. 2011;144(5):646–674.
  2. Thun MJ, DeLancey JO, Center MM, Jemal A, Ward EM. The global burden of cancer: Priorities for prevention. Carcinogenesis. 2010;31(1):100–110.
  3. Reuter S, Gupta SC, Chaturvedi MM, Aggarwal BB. Oxidative stress, inflammation, and cancer: How are they linked? Free Radic Biol Med. 2010;49(11):1603–1616.
  4. Vineis P, Wild CP. Global cancer patterns: Causes and prevention. Lancet. 2014;383(9916):549–557.
  5. Cohen S, Janicki-Deverts D, Miller GE. Psychological stress and disease. JAMA. 2007;298(14):1685–1687.
  6. Anand P, Kunnumakkara AB, Sundaram C, Harikumar KB, Tharakan ST, et al. Cancer is a preventable disease that requires major lifestyle changes. Pharm Res. 2008;25(9):2097–2116.
  7. Key TJ, Schatzkin A, Willett WC, Allen NE. Diet, nutrition, and the prevention of cancer. Public Health Nutr. 2004;7(1A):187–200.
  8. McTiernan A. Mechanisms linking physical activity with cancer. Nat Rev Cancer. 2008;8(3):205–211.
  9. Giovannucci E, et al. Obesity and cancer risk: Epidemiology and mechanisms. Nat Rev Cancer. 2010;10(8):593–607.
  10. Blasco MA. Telomeres and human disease: Ageing, cancer and beyond. Nat Rev Genet. 2005;6(8):611–622.
  11. Shay JW, Wright WE. Telomeres and telomerase in normal and cancer stem cells. FEBS Lett. 2010;584(17):3819–3825.
  12. Aviv A. Telomeres, sex, reactive oxygen species, and human cardiovascular aging. J Mol Med. 2002;80(11):689–695.


JWST Observed Dozens of Little Red Dots That May Be Black Holes in the Early Universe


The James Webb Space Telescope (JWST) has discovered “little red dots” (LRDs): compact, infrared-emitting sources in the early Universe that likely represent young supermassive black holes accreting mass near the Eddington limit within dense cocoons of ionized gas.
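
For readers unfamiliar with the term, the Eddington limit is the standard textbook ceiling on accretion luminosity: the point at which outward radiation pressure on ionized gas balances the black hole's gravity. The formula below is general astrophysics background, not a result from the study itself.

```latex
% Eddington luminosity for a black hole of mass M accreting ionized hydrogen:
% radiation pressure on electrons (Thomson cross-section \sigma_T) balances
% gravity acting on protons (mass m_p).
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
\approx 1.26 \times 10^{38}\,\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}}
```

Accreting “near the Eddington limit” therefore means an object is radiating close to the maximum rate its own gravity can sustain.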

In a new study published in Nature on January 14, researchers investigated the identity of the little red dots. These mysterious objects from the early universe exhibit characteristics of both galaxies and supermassive black holes, yet fit the description of neither.

JWST first observed these little red dots in 2022, shortly after its launch, when it began collecting data. Initially, researchers assumed they were compact, star-filled galaxies, but they revised this assumption because the dots appeared too early in the universe to have formed so many stars, at least under our current understanding of galactic evolution.

Later, several other researchers suggested that the unusual objects might be early supermassive black holes. Light emitted by energized hydrogen atoms around the dots shows that the gas is moving at thousands of miles per second, tugged along by the gravitational pull of the object at the center.
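
To see why such gas speeds point to an enormous central mass, here is a hypothetical order-of-magnitude estimate using the simple virial relation M ≈ v²r/G. The speed (5,000 km/s) and emitting-region radius (0.01 parsec) are illustrative assumptions chosen for this sketch, not figures taken from the study.

```python
# Order-of-magnitude virial mass estimate for a "little red dot" nucleus.
# The input speed and radius are illustrative assumptions, not measured values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # metres per parsec

def virial_mass(v_m_s: float, r_m: float) -> float:
    """Enclosed mass (kg) implied by gas moving at speed v at radius r."""
    return v_m_s**2 * r_m / G

v = 5_000e3          # 5,000 km/s, roughly "thousands of miles per second"
r = 0.01 * PARSEC    # assumed radius of the emitting region

mass_solar = virial_mass(v, r) / M_SUN
print(f"~{mass_solar:.1e} solar masses")  # → ~5.8e+07 solar masses
```

Even with these rough inputs, the implied mass is tens of millions of Suns, which is why such line widths are read as the signature of a supermassive black hole.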

Rodrigo Nemmen, an astrophysicist at the University of São Paulo in Brazil, wrote an accompanying commentary published in Nature. According to him, “Such extreme speeds are a smoking gun of an active galactic nucleus,” meaning a supermassive black hole at the center of a galaxy that is actively pulling in matter.

Unlike typical supermassive black holes, however, these little red dots have not been observed emitting X-rays or radio waves. And whether the dots are black holes or early galaxies, they appear to have accumulated a gigantic mass remarkably early in the universe’s history.

To better understand their nature, researchers studied the spectra emitted from 30 little red dots, each one collected with JWST’s infrared instruments. They found that the light emitted from these LRDs matches the light that the team predicted would be emitted from a supermassive black hole surrounded by a dense cloud of gas. That gaseous cocoon could have trapped X-ray and radio emissions from the growing black holes, blocking them from reaching JWST.

When the team recalculated the masses of these LRDs under the new interpretation, they found that the dots were about 100 times less massive than previously thought. Together, the evidence suggests that LRDs are growing supermassive black holes that are accreting the surrounding gas.
