Google’s time crystal discovery may be so significant that we can’t fully grasp it yet.

Forget about Fuchsia and Google Search. Researchers from Google, Stanford, Princeton, and other institutions may have made a computing breakthrough so significant that we can’t fully grasp it yet. Even Google’s own scientists aren’t certain their time crystal finding is correct. But if the report holds up, Google could be among the first companies to hand the world a critical piece of future technology: quantum computers, which promise to tackle difficult problems with astonishing speed and power using techniques yet to be perfected, will need time crystals as a building block.

What is a quantum computer?

Google isn’t the only business working on quantum computers, and these devices regularly make headlines. But quantum computing won’t be coming to your phone, and it won’t run your games; don’t expect Nintendo’s future consoles to adopt the latest computing technologies either.

According to The Next Web, the plan is to use quantum computers to solve extraordinarily difficult problems: warp drives that might allow rapid interstellar travel, for example, or medical technologies capable of curing almost every ailment.

Earlier this year, Google teamed up with actor Michael Peña for a quantum computing demonstration at I/O 2021:

Quantum computers, however, are extremely difficult to create, maintain, and even operate, and that’s where Google’s time crystals may prove useful. Today’s quantum computers are built from qubits, the quantum counterpart of bits. Qubits behave differently when they are observed than when they are left alone, which makes measuring qubit states challenging, and that instability makes a quantum computer difficult to use. That’s where time crystals enter the picture.
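To see why observation is so disruptive, the Born rule can be sketched in a few lines of Python. This is a toy illustration with names of my own invention, not code from any quantum library:

```python
import math
import random

def measure(amplitudes, rng=random.Random(0)):
    """Collapse a single-qubit state [a0, a1] to a classical bit.

    The Born rule gives P(0) = |a0|^2 and P(1) = |a1|^2. After the
    measurement the superposition is destroyed: the state snaps to a
    basis state, so re-measuring always returns the same bit.
    """
    p0 = abs(amplitudes[0]) ** 2
    bit = 0 if rng.random() < p0 else 1
    collapsed = [1.0, 0.0] if bit == 0 else [0.0, 1.0]
    return bit, collapsed

# An equal superposition: |0> and |1> each with probability 1/2.
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]
bit, collapsed = measure(plus)
```

Before the measurement the outcome is genuinely random; afterwards, the qubit carries no trace of the superposition, which is exactly why reading qubits mid-computation is problematic.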

Google’s time crystals

The time crystal idea, first proposed in 2012, describes a new phase of matter. According to The Next Web, time crystals defy one of Sir Isaac Newton’s renowned principles, the first law of motion: “An object at rest tends to stay at rest, and an object in motion tends to stay in motion.”

Our universe tends toward high entropy (disorder). Energy transfers constantly cause things to happen: entropy stays constant when no process occurs and increases when one does. Time crystals are the exception. Even when used in a process, they can keep their entropy unchanged.

The Next Web offers a fine snowflake analogy for Google’s time crystals. Snowflakes have distinct patterns because their atoms are organized in precise ways. Snow falls, melts, the water evaporates, and eventually it comes back down as snow, and every one of those steps involves an energy transfer. A time crystal is like a snowflake that can flip between two configurations without consuming or wasting energy. Time crystals get to have their cake and eat it too, indefinitely.

What does it mean for you and me?

Google’s time crystals aren’t a finished technology, and even the Google team isn’t sure it has truly created them. The study is available only in pre-print form while it undergoes peer review.

However, if Google can figure out how to build them, next-generation quantum computers may include time crystals, and those computers could be built by anyone. Time crystals would bring quantum coherence to a domain plagued by decoherence, the restless qubits we talked about earlier.

Even so, quantum computers based on time crystals are still in their infancy. Google may have demonstrated that time crystals aren’t merely a theory, but it hasn’t built any such machines.

Producing quantum computers with time crystals, and then using them to develop warp drives or uncover “universally effective cancer therapies,” may require decades of quantum computing research, and fully comprehending quantum computers and time crystals will take decades more. Google’s paper is available as a pre-print, and Quanta Magazine provides a comprehensive overview of the findings, complete with a time crystal animation.

Thunderbolt 5 might provide up to 80 Gbps of bandwidth, according to an Intel leak.

An Intel executive tweeted, then removed, an image exposing some details about the Thunderbolt 5 standard in development, including the fact that Intel is attempting to double the current Thunderbolt bandwidth limit to 80 Gbps.

Gregory Bryant, EVP and GM of Intel’s Client Computing Group, published a tweet early Sunday that sparked discussion of Thunderbolt’s future as a connectivity standard. During a visit to Intel’s Israel research and development center, Bryant posted four photographs, one of which was quietly deleted soon after.

Although Thunderbolt isn’t mentioned on the poster, Bryant’s tweet makes clear the lab tour was about Thunderbolt. Given the close relationship between Intel’s Thunderbolt and USB standards (the Thunderbolt 3 specification is incorporated into USB4), the poster appears to be promoting Thunderbolt 5.

According to the poster, the connection is “designed to complement the existing USB-C ecosystem,” indicating that Intel would continue to use the USB Type-C connection.

The poster specifically highlights the use of “new PAM-3 modulation technology.”

With NRZ coding, the data line transmits one bit at a time using an electrical signal that alternates between two states. Pulse-Amplitude Modulation 4 (PAM-4) is an alternative in which two bits are transmitted at once; the 4 refers to the number of signal levels, one for each possible bit pair.

In PAM-3, a data line can be in one of three states: -1, 0, or +1. A pair of PAM-3 symbols carries a three-bit group, making the scheme roughly 50% more efficient than NRZ.
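To make the arithmetic concrete, the sketch below (illustrative code of my own, not Intel’s actual line coding) packs three bits into two ternary symbols: two PAM-3 symbols give 3² = 9 combinations, enough for the 2³ = 8 possible three-bit groups, i.e. 1.5 bits per symbol versus NRZ’s 1.

```python
def pam3_encode(bits):
    """Pack a 3-bit group into 2 ternary symbols, each in {-1, 0, +1}."""
    assert len(bits) == 3
    value = bits[0] * 4 + bits[1] * 2 + bits[2]   # interpret bits as 0..7
    return (value // 3 - 1, value % 3 - 1)        # base-3 digits, shifted to -1..+1

def pam3_decode(symbols):
    """Recover the 3-bit group from a pair of PAM-3 symbols."""
    value = (symbols[0] + 1) * 3 + (symbols[1] + 1)
    return [(value >> 2) & 1, (value >> 1) & 1, value & 1]
```

Every three-bit group round-trips through a single symbol pair, whereas NRZ would need three symbols for the same payload, which is where the roughly 50% efficiency gain comes from.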

Thunderbolt 5 should offer consumers the same core advantages as Thunderbolt 3, including power delivery, video, Thunderbolt networking, and high bandwidth. Doubling the bandwidth from 40 to 80 Gbit/s enables quicker file transfers and greater data interchange between connected devices with fewer restrictions.
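As a rough back-of-the-envelope sketch (ignoring encoding and protocol overhead, which reduce real-world throughput), doubling the line rate halves the best-case transfer time:

```python
def transfer_seconds(size_gb, link_gbps):
    """Best-case transfer time: gigabytes -> gigabits, divided by line rate."""
    return size_gb * 8 / link_gbps

# A 100 GB project folder, at raw line rate:
t_tb3 = transfer_seconds(100, 40)   # Thunderbolt 3/4 at 40 Gbps -> 20 s
t_tb5 = transfer_seconds(100, 80)   # rumored Thunderbolt 5 at 80 Gbps -> 10 s
```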

In California, Argo AI may now provide public rides in its self-driving vehicles.

Argo AI, the Ford- and VW-backed autonomous vehicle technology firm, has received a permit in California that will allow it to give free rides in its self-driving vehicles on public roads.

The California Public Utilities Commission granted the so-called Drivered AV pilot permit earlier this month, according to the approved application, and published it on its website on Friday, a little over a week after Argo and Ford revealed plans to put at least 1,000 self-driving cars on Lyft’s ride-hailing network in a variety of cities over the next five years, beginning with Miami and Austin.

The authorization, part of the state’s Autonomous Vehicle Passenger Service pilot, adds Argo to a small but growing group of businesses looking to go beyond standard AV testing, a hint that the industry, or at least some companies, is getting ready to go commercial. Argo has been testing its self-driving technology in Ford vehicles in Palo Alto since 2019, and its California test fleet consists of around a dozen self-driving test vehicles. It also runs autonomous test cars in Miami, Austin, Washington, D.C., Pittsburgh, and Detroit.

Aurora, AutoX, Cruise, Deeproute, Pony.ai, Voyage, Zoox, and Waymo have all been granted permits under the CPUC’s Drivered Autonomous Vehicle Passenger Service pilot program, which requires a human safety operator behind the wheel. The permit does not allow companies to charge for rides.

Cruise is the only business that has obtained driverless authorization from the CPUC, allowing it to transport people in its test cars without the presence of a human safety operator.

Obtaining a Drivered authorization from the CPUC is only the first step on the road to commercialization in California. Before a firm can charge for rides in robotaxis with no human safety operator behind the wheel, the state requires it to clear several regulatory hurdles at the CPUC and the California Department of Motor Vehicles, each of which has its own tiered system of permits.

The DMV is in charge of regulating and issuing permits for autonomous vehicle testing on public roads. The DMV issues three types of permits, the first of which allows businesses to test autonomous vehicles on public roads with a safety driver. More than 60 businesses hold this basic testing permit.

The next permit allows driverless testing, followed by a commercial deployment permit. Driverless testing permits, in which no person sits behind the wheel, have become the new benchmark and a prerequisite for firms looking to launch a commercial robotaxi or delivery service in the state. The DMV has issued driverless testing permits to AutoX, Baidu, Cruise, Nuro, Pony.ai, Waymo, WeRide, and Zoox.

Nuro is the only company to have received a deployment permit from the DMV, which allows it to operate on a commercial basis. Because Nuro’s vehicles carry only freight, not passengers, the firm can skip the CPUC approval process.

Meanwhile, in May 2018, the CPUC approved two pilot projects for autonomous vehicle passenger transportation. The Drivered Autonomous Vehicle Passenger Service Pilot program, which Argo just obtained, permits firms to run a ride-hailing service utilizing autonomous vehicles as long as they adhere to certain guidelines. Companies are not permitted to charge for rides, and a human safety driver must be present at all times. Additionally, specific data must be provided quarterly.

Under the second CPUC pilot, the driverless program, Cruise received permission in June 2021 to carry passengers in autonomous vehicles.

It’s worth noting that reaching the holy grail of commercial robotaxis requires obtaining all of these permits from both the DMV and the CPUC.

Twitter will pay hackers to discover biases in its automated picture cropping after being accused of algorithmic bias.

Twitter is running a competition in the hopes of finding biases in its picture cropping algorithm, with cash awards for the best teams (via Engadget). By giving teams access to its code and its picture cropping model, Twitter hopes they will identify ways the algorithm might be harmful (for example, cropping in a way that stereotypes or erases the image’s subject).

Competitors must submit a summary of their results along with a dataset that can be run through the algorithm to demonstrate the problem. Twitter will then award points based on the types of harm discovered, the potential impact on people, and other factors.

The winning team will receive $3,500, with $1,000 awards for the most innovative and most generalizable findings. Those figures have sparked some debate on Twitter, with some people arguing they should include an extra zero. For comparison, Twitter’s standard bug bounty program would pay $2,940 for a bug that let you perform actions on someone else’s behalf (such as retweeting a tweet or picture) via cross-site scripting, and $7,700 for an OAuth flaw that allowed you to take over someone’s Twitter account.

Twitter had already conducted its own study of the image-cropping algorithm, publishing a paper in May that examined how the system was biased in the wake of claims that its preview crops were racist. Twitter has since largely abandoned algorithmically cropped previews, but the feature is still used on desktop, and a good cropping algorithm is a useful tool for a company like Twitter.

Opening a competition allows Twitter to receive input from a much wider group of people. For example, the Twitter team had a meeting to discuss the competition, during which a team member stated that they were getting queries about caste-based biases in the algorithm, something that software developers in California may not be aware of.

Twitter is also searching for more than just unintentional algorithmic bias; both deliberate and unintended harms carry point values on its scale. According to Twitter, unintentional harms are problematic cropping behaviors that can surface even for well-intentioned users posting ordinary photos, while intentional harms are cropping behaviors that could be exploited by someone deliberately posting maliciously crafted images.

According to Twitter’s announcement blog, the competition is distinct from its bug bounty program; if you submit a report about algorithmic bias to Twitter outside of the competition, the company warns, it will be closed and tagged as not applicable. If you’re interested in participating, visit the competition’s HackerOne page for the rules, qualifications, and other details. Submissions are open until August 6th at 11:59PM PT, and the winners will be revealed on August 9th at the DEF CON AI Village.

Why is wildfire smoke potentially worse than other pollutants in the air?

Wildfires in the Western United States have spread smoke across the landscape, posing a growing hazard to public health. The 2020 fire season, made worse by climate change, was so severe that it nearly quadrupled the previous record for acres burned in California, and at-home monitoring of the smoke’s impact on air quality became practically ubiquitous. This year’s season is off to a disastrous start, with smoke from West Coast wildfires already darkening skies on the East Coast.

Smoke isn’t a typical form of pollution. According to research published in the journal Nature earlier this year, the fine particles in wildfire smoke can be up to ten times more hazardous to human health than soot from other sources such as tailpipes and factories.

The researchers studied fine particles, also known as PM2.5, which are roughly 30 times smaller than the diameter of a human hair. When a fuel burns, whether gasoline or plant matter, these tiny particles are released into the air and sometimes into our bodies. According to the study, fine particles from wildfire smoke led to as much as 10% more respiratory hospitalizations than would have occurred without the smoke, while pollution from other sources, though still hazardous, increased hospitalizations by only around 1%.

Rosana Aguilera, the study’s lead author and a postdoctoral researcher at the University of California, San Diego, explained in an interview what she and other researchers are doing to learn more about the effects of wildfire smoke on human health.

This interview has been lightly edited for clarity.

What are “fine particles,” and why are they a concern?

The research group I work in investigated fine particles because they are one of the primary components of wildfire smoke. These particles are distinct from others: their chemical makeup varies depending on the material being burned. Wildfire smoke and its fine particles can contain a variety of chemicals, including carbon and heavy metals.

We’re concentrating on these small particles found in wildfire smoke right now because wildfire smoke is becoming increasingly prevalent as a source of emissions in various parts of the United States and the world. It’s one form of air pollution in California that appears to be on the rise in the foreseeable future. Some articles support the notion that wildfire smoke will be one of the primary sources of fine particulate matter in areas such as the Western United States.

What kind of impact may such tiny particles have on people’s health?

It’s one of the air pollutants to be concerned about because it’s tiny enough to infiltrate the respiratory system and reach deep into the lungs. From there it can enter the bloodstream and spread to other organs. It can make breathing difficult, irritate the skin, and aggravate illnesses such as asthma and other respiratory and cardiopulmonary problems.

We mostly deal with acute impacts, which are the reactions that occur after being exposed to wildfire smoke for a few days. My study group isn’t focusing on long-term impacts right now, but I believe it’s an issue that needs to be explored more. Long-term exposure is more difficult to study since it requires following individuals who have been exposed to several wildfires.

So, how does wildfire smoke compare to other sources of pollution like vehicles, trucks, and industry?

When comparing wildfire smoke to non-smoke fine particles, we discovered that wildfire smoke is more hazardous in terms of increased hospitalizations.

The mix of traffic emissions and wildfire smoke may be extremely different. We haven’t looked at the chemical makeup of these fine particles in relation to their origins, but several toxicological studies have delved into this and shown that wildfire smoke toxicity may be enhanced: if the fire passes through a structure, the smoke may pick up pollutants from homes and other buildings.

What do you want to achieve with your research?

We’d like to investigate these differential effects of fine particles concerning emission sources, as well as try to learn more about the chemical makeup of various wildfires.

If wildfire smoke has a higher impact, and if it will be one of the primary sources of this sort of pollution in the future — or if it currently is — we need to learn more about why it is more damaging. Then, what kind of long-term impact can we expect?

Features you’ll lose when upgrading from Windows 10 to Windows 11.

When Windows 11 is released later this year, it will include a new design, new colours, and new functions. However, not everything in Windows 10 will be preserved after the upgrade.

Between now and the public release of Windows 11, expect a few feature additions and subtractions, but here’s everything we know about what will be lost along the way.

Timeline

Perhaps you’ve never used Timeline, which is one reason it’s being phased out in Windows 11. The feature syncs your activity over the previous 30 days across different Windows PCs (files you’ve opened, websites you’ve visited, and so on), making it quicker to switch between devices signed in with the same Microsoft account.

Timeline will not be available on Windows 11. Screenshot: Windows 10

Live Tiles

The Live Tiles feature of the Windows 10 Start menu, which lets apps present and update bits of information in real time, was never well received by developers. If you think that sounds a lot like widgets, you’d be correct. With Windows 11, Microsoft is attempting to bring back desktop widgets, so let’s hope they fare better than Live Tiles.

Start Menu Groups 

Another feature being removed from the Start menu is the ability to organise and name groups of tiles in categories such as productivity, writing, gaming, and so on. The Start menu’s layout will also no longer be resizable, implying that Microsoft intends to make the Start menu experience the same for everyone (as well as move it into the centre of the screen).

In Windows 11, tile grouping and naming are no longer available in the Start menu. Screenshot: Windows 10

Internet Explorer

What exactly is it? Didn’t you think it was already dead? It’s still available in Windows 10 if you look hard enough, but in Windows 11, all traces of Internet Explorer will be gone and Microsoft Edge will take its place. For those truly ancient legacy programmes and sites you still need for whatever reason, use Edge’s IE mode.

Cortana

Although Microsoft’s digital assistant will not be completely removed from Windows 11, it will be removed from the setup process and will no longer be pinned to the taskbar. It’s unclear what Microsoft has planned for Cortana, but based on the capabilities introduced to it in the previous year or so, it may be recast as a business tool.

In Windows 11, Cortana will be less prominent. Screenshot: Windows 10

Skype

Skype will continue to be available in Windows 11, but it will no longer be an integrated component as it is in Windows 10. That’s because Microsoft has shifted its attention to Teams as the solution for all of your communication needs, including video, so expect tight Teams integration throughout the final Windows 11 experience.

Tablet Mode

Although Windows 10 works well on tablets like the Surface Pro as well as on full desktop and laptop computers, Windows 11 will not have a dedicated mode for tablet devices. Instead, this functionality will be redesigned, with parts of it happening automatically (when you attach or detach a Bluetooth keyboard, for example).

Taskbar Location

Continuing the theme of removed customizations, the taskbar in Windows 11 can live only at the bottom of the screen. You may not have known it, but Windows 10 lets you move the taskbar to the left, right, or even the top of the screen. If you enjoy tinkering with your operating system, you’re out of luck.

You may move the taskbar in Windows 10 if you haven’t noticed. Screenshot: Windows 10

Quick Status

Applications in Windows 10 can leave small blocks of information on the lock screen to remind you of incoming emails, upcoming calendar appointments, and so on. When Windows 11 ships, this feature, known as Quick Status, will be unavailable to applications, though widgets (see above) may be able to fill the void.

Windows S Mode

This is another feature that isn’t going away altogether, though you’ll see it less often: S Mode, which improves speed and security by allowing only programmes from the official Microsoft Store to be installed, will be available only in the Windows 11 Home edition. S Mode is currently available for both Windows 10 Home and Windows 10 Pro.

Malware hiding in AI neural networks

A trio of Cornell University researchers has discovered that malware can be hidden inside AI neural networks. Zhi Wang, Chaoge Liu, and Xiang Cui have published a paper on the arXiv preprint server describing their experiments with inserting code into neural networks.

As computer technology becomes more complex, so do criminals’ attempts to break into devices for their own ends, such as deleting or encrypting data and demanding payment from users for its recovery. In their new study, the researchers discovered a new technique for infecting certain types of computer systems running artificial intelligence applications.

AI systems function by processing data in the same manner that the human brain does. However, the study team discovered that such networks are vulnerable to foreign code intrusion.

Foreign actors can infiltrate neural networks almost by their very nature: all an attacker has to do is mimic the network’s structure, much as memories are added to the human brain. The researchers demonstrated this by embedding malware into the neural network powering an AI system called AlexNet, even though the malware was sizable, taking up 36.9 MiB within the model. To inject the code, they picked what they judged to be the optimal layer for embedding. They added it to a model that had already been trained, though they cautioned that attackers might prefer to target an untrained network, since the embedding would have less impact on the network as a whole.
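The general idea can be sketched as follows. This is a simplified illustration of hiding payload bytes in the low-order mantissa bytes of float32 weights; the helper names are my own, and the paper’s actual embedding strategy differs in detail:

```python
import struct

def embed(weights, payload):
    """Hide payload bytes in the least-significant byte of float32 weights.

    Overwriting the lowest 8 of the 23 mantissa bits perturbs each weight
    by only a tiny relative amount, so the network's outputs are left
    nearly unchanged.
    """
    out = []
    for w, b in zip(weights, payload):
        raw = bytearray(struct.pack("<f", w))
        raw[0] = b                       # little-endian: byte 0 is the low mantissa byte
        out.append(struct.unpack("<f", bytes(raw))[0])
    return out + list(weights[len(payload):])

def extract(weights, n):
    """Recover n hidden bytes from the doctored weights."""
    return bytes(struct.pack("<f", w)[0] for w in weights[:n])
```

Note that an attacker would still need a separate loader to pull the bytes back out and execute them, which matches the researchers’ point that embedded code is not harmful on its own.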

Not only did ordinary antivirus software fail to detect the malware, but the AI system’s functionality remained nearly unchanged after infection, according to the researchers. As a result, if carried out surreptitiously, the infection may have gone unnoticed.

The researchers point out that merely inserting malware into a neural network would not in itself be harmful; whoever snuck the code into the system would still need a way to execute it. They also note that now that it is known hackers can insert code into AI neural networks, antivirus software can be updated to detect it.

The latest autonomous drone technology and its capabilities

The Scout drone

American Robotics’ autonomous drone has been certified by the Federal Aviation Administration, making it the first federally licensed drone on the market.

Drones that operate independently are a significant technical advance. They aren’t meant for domestic use, since safety is still an issue, but they could boost productivity in a variety of industries: it’s nearly impossible to have a person operate multiple drones from morning to night, every day.

The autonomous drone is a fully integrated system that automates everything from landing to charging to data processing, making it an all-in-one solution.

The solution’s three key components are Scout, the AI-powered autonomous drone; ScoutBase, the weatherproof charging and edge-computing station; and ScoutView, the fleet-management and analytics software.

ScoutBase is where Scout is charged and its data is processed, while ScoutView allows businesses to monitor and communicate with drones without the need for a human operator.

The drone is equipped with visual, multispectral, and infrared cameras, making data collection quick and straightforward, and the collected data can be accessed in real time. Once installation is complete, Scout systems can perform missions independently, collecting, processing, and analyzing data.

Demands for Autonomous Drones and the Market

Commercial drones have a huge market: their total addressable market (TAM) is expected to be worth $100 billion. Drones could be put to work in a variety of areas, including industry, agriculture, and defense.

It might be used in industrial markets for asset inspection, tracking, security, and safety. It may be used for weed identification, disease detection, plant counting, research, harvest planning, and harvest timing in the agricultural market.

And if you’re looking for surveillance and reconnaissance in the defense industry, you’re in luck! These markets and sectors can use autonomous drones to cover broad areas that are difficult for people to survey quickly, and the integrated software and solutions make data collection easier.

Ondas has acquired a Software Defined Radio platform for mission-critical IoT applications. To manage thousands of connected devices over long distances, Ondas provides a range of reliable and secure broadband networks. With the help of Ondas’ high-bandwidth network, American Robotics’ autonomous drones will be able to send and receive data over long ranges, with thousands of drones continually gathering and processing high-resolution data.

This, we believe, is the way industrial data will be collected in the future. The combined company can provide the ultimate autonomous drone with unrivaled capabilities that can boost production in a variety of sectors.

Unveiling of a 100-Qubit Quantum Computing System

Atom Computing, a quantum computing firm, has announced a quantum computing machine with unrivaled capabilities. The first-generation Phoenix system can hold up to 100 qubits and is touted as exceptionally stable, with long coherence times that allow for high performance. Separately, the firm announced approximately $15 million in Series A funding and the appointment of a new CEO.

Atom Computing’s Phoenix uses optical tweezers to trap 100 atomic qubits (of an alkaline earth element) in a vacuum chamber; lasers then manipulate the qubits’ quantum states. According to the firm, Phoenix is well suited to complicated calculations because its qubits are exceptionally robust, with very long coherence times (over 100 ms).

Using optical tweezers to manipulate atomic qubits in a vacuum environment is not a novel concept: Honeywell sells similar devices, though its quantum computers have only six qubits. Atom Computing says its laser technology and platform design allow the qubit count to scale to 100 units. The firm still has to demonstrate such a system, however.

“The development of quantum computing has advanced to the point that it is no longer a decade away. Because of our systems’ scalability and reliability, we are certain that we will be able to lead the industry to genuine quantum advantage,” stated Rob Hays, Atom Computing’s CEO and President. “We’ll be able to solve complicated problems that were previously impossible to handle with traditional computing, even with Moore’s Law’s exponential performance improvements and massively scalable cluster designs.”

Atom Computing announced that it has raised $15 million in Series A investment from venture capital companies Venrock, Innovation Endeavors, and Prelude Ventures, in addition to providing the first information about its Phoenix quantum computing system. The funds will go toward the construction of the Phoenix quantum computing system.

Rob Hays has also been named CEO and President of the firm. Hays previously worked at Intel for 20 years, establishing the company’s Xeon roadmap. Later in his career, he worked at Lenovo, where he established the company’s data center product and service strategy.

The Olympic flame in Tokyo is the first to be fueled by hydrogen.

Naomi Osaka stands near the Olympic torch after igniting it during the opening ceremony of the 2020 Summer Olympics in Tokyo, Japan, on Friday, July 23, 2021. (David J. Phillip/AP Photo)

TOKYO (AP) — The Tokyo Olympic cauldron was inspired by the sun and is built to be more eco-friendly.

Throughout the games, the flame at Tokyo’s National Stadium and another cauldron blazing along the waterfront at Tokyo Bay will be fueled in part by hydrogen, marking the first time the fuel source has been utilized to light an Olympic fire.

Propane has been the most common fuel since the first modern cauldron was lit at the 1928 Amsterdam Games, though magnesium, gunpowder, resin, and olive oil have all been used. The torch relay was inaugurated eight years later, for the Berlin Games.

When hydrogen is burned, unlike propane, it does not create carbon dioxide. The Tokyo cauldron is fueled by hydrogen produced by a renewable-energy-powered facility in Fukushima Prefecture. During the torch relay, both propane and hydrogen were utilized.

Organizers of the London 2012 Olympic Games boasted about their plans for a low-carbon torch but couldn’t perfect the design in time, so they used a propane-butane mixture instead. In 2016, Brazilian officials ordered a smaller cauldron for Rio de Janeiro to minimize the amount of fuel required.

The Tokyo cauldron was created by Canadian-born designer Oki Sato. His sun-inspired sphere opens like the petals of a flower, evoking “vitality and hope,” according to the organizers.

At 11:48 p.m., tennis player Naomi Osaka lit the cauldron, while performers throughout the night carried sunflowers, which are known for turning toward the sun.

The first torch for these games was lit 16 months ago in Olympia, Greece, but owing to the pandemic, the relay was put on hold for much of 2020. Before the relay formally began in Fukushima on March 25, 2021, officials displayed the torch in prefectures devastated by the earthquake and tsunami that struck the region in 2011.

Before the torch arrived at the National Stadium in Tokyo’s Shinjuku City, several parts of the relay were halted owing to concerns about the spread of the coronavirus.

Samsung Galaxy A22 5G vs Poco M3 Pro 5G: Price, processor, specifications

Samsung has just added a new 5G smartphone to its Galaxy A series. The Samsung Galaxy A22 5G launched in India with the MediaTek Dimensity 700 processor, a 5,000mAh battery, a 90Hz display, and 5G connectivity for around ₹20,000.

Poco M3 Pro 5G is another phone with the same chipset and 5G capability. It even costs ₹4,000 less than the Samsung Galaxy A22 5G and has been on the market for a few months.

Here’s how the two low-cost 5G smartphones with the same chipset match up against one another:

Performance

The MediaTek Dimensity 700 processor powers both the Samsung Galaxy A22 5G and the Poco M3 Pro 5G. The SoC has an octa-core configuration and is built on a 7nm process.

On both phones, the Dimensity 700 is configured to run its two high-performance cores at 2.2GHz and the remaining six cores at 2GHz.

Both phones are powered by a 5,000mAh battery, though the Poco M3 Pro 5G’s may last longer thanks to its adaptive refresh rate, which can drop as low as 30Hz. The Samsung Galaxy A22 5G supports 15W charging, while the Poco phone supports 18W fast charging and ships with a 22.5W charger.
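As a rough illustration of what those charging figures mean in practice, here is a back-of-envelope estimate of full-charge times. The nominal cell voltage (3.85V) and charging efficiency (85%) are assumed values for the sketch, not figures from either manufacturer:

```python
# Rough charge-time estimate for a 5,000mAh battery at different charger powers.
# Assumed values (not from the article): 3.85V nominal cell voltage, 85% efficiency.
NOMINAL_VOLTAGE_V = 3.85
EFFICIENCY = 0.85

def full_charge_hours(capacity_mah: float, charger_watts: float) -> float:
    """Hours for a 0-to-100% charge, ignoring the slow taper near full."""
    energy_wh = capacity_mah / 1000 * NOMINAL_VOLTAGE_V
    return energy_wh / (charger_watts * EFFICIENCY)

for watts in (15, 18, 22.5):
    print(f"{watts}W -> ~{full_charge_hours(5000, watts):.1f} h")
```

In reality phones charge fastest in the early part of the cycle and slow down near 100%, so treat these numbers as optimistic lower bounds.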

Memory and storage

The Samsung Galaxy A22 5G comes with 8GB of RAM and 128GB of internal storage in a single memory configuration. Poco M3 Pro 5G, on the other hand, comes in two versions: one with 4GB RAM and 64GB storage, and another with 6GB RAM and 128GB storage. The 4GB model of the Poco M3 Pro was released later.

Camera

Both phones feature a triple camera setup on the back and a single selfie camera on the front. A 48MP main camera, as well as 5MP and 2MP sensors, manage photography on the Samsung Galaxy A22 5G. The Poco M3 Pro, on the other hand, has a 48MP sensor and two 2MP sensors. Both phones have an 8-megapixel front camera.

Display

The Samsung Galaxy A22 5G has a 6.6-inch display with a 1080×2408-pixel resolution and a 90Hz refresh rate.

The Poco M3 Pro 5G has a 6.5-inch panel with a 1080×2400-pixel resolution. It is an adaptive-sync display that switches between 30Hz, 50Hz, 60Hz, and 90Hz refresh rates.
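Neither maker quotes pixel density, but it follows directly from the resolution and diagonal size above. A quick sketch:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

galaxy_a22_5g = ppi(1080, 2408, 6.6)   # ~400 ppi
poco_m3_pro_5g = ppi(1080, 2400, 6.5)  # ~405 ppi
print(f"Galaxy A22 5G: {galaxy_a22_5g:.0f} ppi")
print(f"Poco M3 Pro 5G: {poco_m3_pro_5g:.0f} ppi")
```

The two panels end up within a few pixels per inch of each other, so sharpness is effectively a wash.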

Price

The Samsung Galaxy A22 5G has a single version that costs ₹19,999. The Poco M3 Pro 5G in 4GB trim costs ₹13,999, while the 6GB model costs ₹15,999.

The ‘holy grail’ of batteries: Scientists create an ‘iron-air’ battery that rusts and stores power for days at a tenth of the price of lithium-ion batteries.

In the United States, an ‘iron-air’ battery has been created that can hold electricity generated by wind or solar power plants for days before gently discharging it to the grid.

Form Energy intends to stack thousands of its “iron-air” batteries together in massive warehouses.

According to Form Energy, a technology firm based in Massachusetts, the battery will help combat climate change by eliminating the demand for fossil-fuel power plants.

The iron-air battery is a ‘new kind of cost-effective, multi-day energy storage device’ capable of supplying electricity for 100 hours at a fraction of the cost of lithium-ion, making it the ‘holy grail’ of renewable-energy storage.

It’s constructed of iron, one of the most abundant elements on the planet, and it operates by inhaling oxygen, changing iron to rust, and then converting rust back to iron.

Taking in oxygen and converting the iron back and forth is what charges and discharges the battery, a process that allows energy to be held for far longer than in conventional cells.

According to the company, the batteries are too heavy for use in electric vehicles; instead, they are designed to tackle the challenge of maintaining a consistent grid power supply.

This addresses one of renewable energy’s most vexing problems: how to store vast amounts of electricity inexpensively and feed it to power networks when the sun isn’t shining on solar panels or the wind isn’t turning turbines.

Solar and wind resources are the cheapest sources of energy in much of the globe, but unlike fossil fuel power plants, they do not provide a consistent supply.

The electric grid now has to figure out how to deal with this supply unpredictability while maintaining reliability and keeping electricity affordable.

According to Form Energy, their innovative battery technology is the answer to this rising problem.

According to Mateo Jaramillo, CEO and co-founder of Form Energy, the company performed a thorough analysis of all existing technologies and ultimately redesigned the iron-air battery to ‘optimize it for multi-day energy storage for the electric grid.’

According to the company, the battery they are creating would enable governments to retire thermal assets such as coal and natural gas power facilities completely.

‘We’re attacking the largest obstacle to deep decarbonization with this technology: making renewable energy accessible when and where it’s required, even over several days of harsh weather or grid disruptions,’ Jaramillo explained.

It will also be less expensive, according to the company. Nickel, cobalt, lithium, and manganese minerals are used in lithium-ion battery cells, which can cost up to $80 per kilowatt-hour of storage.

By using iron, Form hopes to cut the mineral cost of each cell to less than $6 per kilowatt-hour, and to keep a complete battery system under $20 per kilowatt-hour of energy storage.
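To put those per-kilowatt-hour figures in perspective, here is a simple sketch comparing the cost of a multi-day storage installation at the quoted prices. The 100MWh example capacity (a 1MW system discharging for 100 hours) is illustrative, not a figure from Form Energy:

```python
# Figures quoted in the article: lithium-ion cells can cost up to $80/kWh of
# storage; Form Energy targets <$20/kWh for a complete iron-air system.
LITHIUM_ION_USD_PER_KWH = 80
IRON_AIR_SYSTEM_USD_PER_KWH = 20

def storage_cost_usd(capacity_mwh: float, usd_per_kwh: float) -> float:
    """Total storage cost for a system of the given capacity."""
    return capacity_mwh * 1000 * usd_per_kwh

# A 100-hour, 1MW installation stores 100 MWh.
capacity_mwh = 100
li_ion = storage_cost_usd(capacity_mwh, LITHIUM_ION_USD_PER_KWH)
iron_air = storage_cost_usd(capacity_mwh, IRON_AIR_SYSTEM_USD_PER_KWH)
print(f"Lithium-ion: ${li_ion:,.0f}")   # $8,000,000
print(f"Iron-air:    ${iron_air:,.0f}") # $2,000,000
```

Even at the system level, the targeted $20/kWh would be a quarter of the quoted lithium-ion cell cost; the headline “tenth of the price” refers to the sub-$6/kWh mineral cost per cell.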

At that price, experts say, renewables will eventually be able to replace traditional fossil-fuel power plants.

Breakthrough Energy Ventures, a climate investment fund backed by Bill Gates, Jeff Bezos, and others, is one of the firm’s investors. It has also received investment from ArcelorMittal, one of the world’s largest steelmakers and iron-ore miners.

HOW DOES AN IRON-AIR BATTERY WORK?

According to the company, the primary concept of functioning is reversible rusting.

The battery takes in oxygen from the air during discharging and transforms ferrous metal into rust.

The introduction of an electrical current during charging transforms the rust to iron, and the battery exhales oxygen.

The batteries collect energy from renewable sources, store it for up to 150 hours, and discharge it to the grid when renewables are unavailable.

Each battery module is roughly the size of a washing machine.

Each of these modules is filled with a non-flammable, water-based electrolyte similar to that found in AA batteries.

Inside the liquid electrolyte sit stacks of 10 to 20 metre-scale cells, each comprising iron electrodes and air electrodes, the components that carry out the electrochemical reactions to store and release electricity.

Thousands of battery modules are joined together in modular megawatt-scale power blocks, which are housed in an environmentally protected container.

Tens to hundreds of these power blocks will be linked to the energy grid, depending on the scale of the system.

A one-megawatt system, in its most compact form, takes roughly one acre of land.

3MW/acre can be achieved with higher density designs.
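The land-use figures translate directly into acreage for larger installations. A minimal sketch, with the 150MW system size chosen purely for illustration:

```python
# Land-use arithmetic from the figures above: a compact 1MW system needs
# roughly one acre; higher-density designs reach 3MW per acre.
def acres_needed(power_mw: float, mw_per_acre: float) -> float:
    """Acres of land required for a power block of the given size."""
    return power_mw / mw_per_acre

# A hypothetical 150MW installation (example figure, not from Form Energy):
compact = acres_needed(150, 1.0)  # compact layout
dense = acres_needed(150, 3.0)    # high-density layout
print(f"Compact layout: {compact:.0f} acres, high-density: {dense:.0f} acres")
```

Tripling the density cuts the footprint from 150 acres to 50, which is why the denser designs matter for utility-scale deployments.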

How To Fix Windows 11 BSOD (Black Screen of Death)

The Blue Screen of Death is the one screen all Windows users dread. The BSOD has been around for decades, and while it hasn’t changed much over the years, it’s still powerful enough to make users’ hearts skip a beat whenever it appears. Now, it seems, our Windows blues are turning black.

Here’s all you need to know about Windows 11’s Black Screen of Death, its causes, and how to solve the problems that could be causing it.

For BSOD in Windows 11, black is the new blue.

Multiple sources have verified that the classic Blue Screen of Death is receiving a facelift, but just on the surface, with the ‘B’ for Blue being replaced with the ‘B’ for Black.

The change is minor, but the goal is to make the BSOD match the colours of Windows 11’s startup and shutdown screens. Microsoft has experimented with altering the colour of the BSOD before, but no final decision has been made. Some customers reported seeing green or even red screens of death due to hardware problems in prior Windows 10 versions. However, for most of Windows’ existence the crash screen has been blue, to the point that many people find its familiarity reassuring.

Microsoft has yet to comment on the new colour scheme. It’s conceivable that the black screen of death is still in the works and will appear in a future Windows 11 build. Either way, it will still be referred to as the BSOD.

5 ways to fix Black Screen of Death on Windows 11

Method 1 : Run Windows Memory Diagnostic

Internal components of your computer, such as RAM sticks, can fail or come loose, resulting in a BSOD. You should run a memory diagnostic to see whether that’s the problem. To do so, open the Run box by pressing Win+R, type mdsched.exe, and press Enter.

You will be prompted to restart your computer. Click on Restart now and check for problems (recommended).

Your computer will restart when the test is finished, and you should be able to check the results once it has started up. If you don’t see the results of the memory test right away, you may have to look for them yourself. To do so, right-click the Start button and pick Event Viewer from the menu that appears.

Then click on Windows Logs and double-click on System.

Now find the most recent MemoryDiagnostic file.

If the results show that the memory test detected no problems, then you can rule this out as the core of the problem.

If, on the other hand, the memory test reports errors, it’s an indication that you’re dealing with defective RAM. You may have to either reseat the RAM sticks or replace them entirely.

Method 2 : Note the BSOD error code

The BSOD displays your computer’s error message, along with a sad emoji, a link to Microsoft’s bluescreen troubleshooting website, a QR code, and a stop error code.

These error codes are meant to point you toward the probable causes of the BSOD. In principle, you can pull out your phone, scan the QR code (or at the very least write down the stop code), and be taken to the Windows troubleshooting website.

There, you can go through the steps to find out the potential causes of the problem and how to go about fixing them. 

Method 3 : Uninstall recent updates

Black screen problems can also occur after a system update. Windows updates aren’t without flaws: although they are meant to keep your system current with the newest software and device drivers, they can occasionally cause difficulties in otherwise reliable systems.

If you started getting BSOD crashes after installing an update, you may wish to undo it. To see your update history, open Settings by pressing Win+I. From the left side, select Windows Update.

Then select Update history on the right.

You can see all of the most recent updates that your system has received right here. Scroll down and click Uninstall updates under ‘Related Settings‘ to uninstall recent updates.

On the next screen, select the most recent updates and click on Uninstall.

Once they’re uninstalled, restart your system. 

Method 4 : Boot into safe mode

If the problem isn’t too serious, you might be able to identify and troubleshoot it using the methods listed above. However, if the black screen of death prevents you from running your system normally, you may need to boot your computer into safe mode.

To do so, press Win+R to open the RUN box, type ‘msconfig’, and hit Enter.

This will open up the System Configuration. Click on the ‘Boot’ tab. 

Then click on Safe boot under ‘Boot options’.

Select Minimal and hit OK.

Restart your computer to boot into Safe Mode. Your system will load only the bare minimum of Windows settings and programmes required to function, avoiding third-party software. If you can work in Safe Mode without experiencing any black screen errors, it’s likely that a service or programme is causing the issue.

To discover the culprits, perform malware scans and then use system restore to restore your computer to a prior state.

Method 5 : Run a System Restore

Restoring your system to a prior state should resolve your black screen of death issues, and it’s not that tough to accomplish. To do so, hit Start, type “recovery,” and then select Recovery from the results.

Then click on Open System Restore.

Click on Next.

Then select a restore point and click Next.

Now click on Finish to confirm your restore point.

Any apps and drivers you’ve installed since the restore point was created will be uninstalled when you run system restore. This should fix your black screen of death issues and get your machine back up and running.

Netflix’s gaming expansion begins on mobile devices.

Netflix stated in its second-quarter earnings report on Tuesday that its early gaming efforts will be centered on mobile games and that the games will be included with customers’ Netflix subscriptions. The news comes just days after the firm announced the hiring of Mike Verdu, a former EA and Oculus executive, to lead its gaming efforts.

From Netflix’s letter to investors, here’s what it said about gaming:

We’re also in the early phases of extending into games, drawing on our previous interactive initiatives (e.g., Black Mirror Bandersnatch) and Stranger Things games. We see gaming as a new content category for us, in the same way that we’ve expanded into original films, animation, and unscripted television. Games, like films and shows, will be included in customers’ Netflix subscriptions at no additional cost. We’ll start by concentrating on games for mobile devices. We’re as thrilled as ever about our movie and TV series offerings, and we foresee a long runway of increased investment and growth across all of our existing content categories, but now that we’ve been at it for almost a decade, we believe it’s time to learn more about how our members value gaming.

Although Netflix just expanded its TV deal with Shonda Rhimes to include feature films and gaming content, there is still no indication of what sorts of games will be available, nor of how Netflix subscribers will access them.

In the past, the business has admitted that it competes for time and attention with games, with co-CEO Reed Hastings stating in 2019 that “we compete with (and lose to) Fortnite more than HBO.” In April, as part of the company’s first-quarter reporting, COO Greg Peters mentioned the company’s interest in gaming (PDF). And with titles like Black Mirror: Bandersnatch and Carmen Sandiego, the firm has already dabbled in gaming.

However, with the recent appointment of Verdu and the fresh details revealed Tuesday about its first intentions, Netflix now appears more interested in gaming than ever.

The Pegasus spyware hack reveals that Apple needs to substantially improve iPhone security.

Apple has always been proud of the secure service it provides to its customers. It often pokes fun at Android, speaks at length about privacy during keynotes, and has released a few features that have irritated the other Big Tech companies. However, the new Pegasus spyware disclosure has left Apple red-faced, indicating that the Cupertino-based company needs to beef up its security. Journalists and human rights campaigners from all around the world, including India, were targeted by the malware.

The Amnesty International Security Lab discovered evidence of Pegasus infections or attempted infections on 37 of the 67 phones it examined. 34 of these were iPhones: 23 showed evidence of a successful Pegasus infection and the other 11 showed signs of an attempted infection.

Only three of the 15 Android phones, on the other hand, revealed signs of a hacking attempt. However, there are two things to consider before concluding that Android phones are safer than iPhones. One, Amnesty’s investigators found more Pegasus evidence on iPhones than anywhere else partly because Android’s logs aren’t extensive enough to retain the data required for decisive findings. Two, people have greater security expectations of the iPhone.

Apple has said often in previous years that the iPhone is more secure than Android, and that assertion may hold whether Pegasus exists or not. However, the Pegasus story demonstrates that the iPhone is not as secure, let alone unhackable, as Apple claims. This is reflected in Amnesty International’s findings.

The issue is especially concerning because it affected even the most recent iPhone 12 devices running the most recent version of Apple’s operating system. That’s usually the best and last level of protection a smartphone maker can provide.

“Apple strongly opposes cyberattacks against journalists, human rights advocates, and anyone working to make the world a better place,” Ivan Krstic, head of Apple Security Engineering and Architecture, said in a statement to India Today Tech. “Apple has led the industry in security innovation for over a decade, and as a consequence, security experts believe that the iPhone is the safest and most secure consumer mobile device available. Such attacks are very complex, cost millions of dollars to create, have a short shelf life, and are used to target specific persons. While this means they pose no harm to the vast majority of our users, we continue to work diligently to secure all of our customers, and we’re always implementing additional safeguards for their devices and data.”

How did the iPhone’s security get hacked?

Pegasus zero-click attacks were used to hack the iPhones, according to the study. It claims that thousands of iPhones may have been infected, but it cannot confirm the exact number of phones affected. ‘Zero-click’ attacks, as the name implies, require no action from the phone’s user, making an already powerful piece of malware even more potent. These attacks target software that accepts data without first verifying that it is trustworthy.

In November 2019, Google Project Zero security researcher Ian Beer uncovered a similar vulnerability, revealing that attackers could take total control of an iPhone within radio proximity without requiring any user interaction. Apple released a software update to remedy the problem but acknowledged that the flaw was serious enough to fully compromise affected devices.

Because zero-click attacks don’t involve any user interaction, avoiding them becomes extremely tough. Even if you are aware of phishing attempts and use the best online practices, you may still be targeted by this malware.

What does Pegasus have access to?

While there is plenty of data on who was targeted and how, no investigation has been able to uncover exactly what data was gathered. The possibilities, however, are nearly limitless: Pegasus can gather emails, call logs, social media posts, user passwords, contact lists, photos, videos, sound recordings, and browsing history, among other things.

It can also turn on the camera or microphone to capture new photos and recordings, listen to voicemails, and gather location records to figure out where a user has been, all without the user touching their phone or clicking on a suspicious link.