Pegasus Spyware

Recently, a global collaborative investigative effort, the Pegasus Project, revealed that the Israeli company NSO Group’s Pegasus spyware targeted over 300 mobile phone numbers in India. As per reports, at least 40 journalists, Cabinet Ministers, and holders of constitutional positions were possibly subjected to surveillance. The reports are based on a leaked global database of 50,000 telephone numbers.

What is Pegasus?


  • Pegasus is spyware created by NSO Group, an Israeli cybersecurity firm founded in 2010. The NSO Group’s founders come from Unit 8200, Israel’s elite intelligence unit; it is also the Israel Defense Forces’ largest military unit and probably the foremost technical intelligence agency in the world.
  • Pegasus can hack any iOS or Android device and steal a variety of data from the infected device.
  • It works by sending an exploit link; if the target user clicks on the link, the malware, or the code that allows the surveillance, is installed on the user’s phone.
  • Pegasus can be deleted remotely. It is very hard to detect and, once deleted, leaves few traces.
  • It can also be used to plant messages or mails, which is why there are theories it may have been used to plant fake evidence to implicate activists in the Bhima Koregaon case.
Pegasus is designed for three main activities:
1. collection of historic data on a device without the user’s knowledge,
2. continuous monitoring of activity and gathering of personal information, and
3. transmission of this data to third parties.
Israel identifies Pegasus as a cyberweapon and claims that its exports are controlled.

Pegasus spyware has evolved from its earlier spear-phishing methods, which used text links or messages, to ‘zero-click’ attacks that require no action from the phone’s user. This is the most worrying aspect of the spyware.

  • Zero-click attacks help spyware like Pegasus gain control over a device without any human interaction or human error.
  • Most of these attacks exploit software that receives data even before it can determine whether what is coming in is trustworthy, such as an email client.
  • By their very nature they are hard to detect, and hence even harder to prevent. Detection becomes still harder in encrypted environments, where there is no visibility on the data packets being sent or received.

The Kardashev Scale – Classifying Alien Civilizations

The observable universe contains up to two trillion galaxies, each made of billions and billions of stars. In the Milky Way galaxy alone, scientists estimate that there are some 40 billion Earth-like planets in the habitable zones of their stars. Looking at these numbers, there is a real possibility that alien civilizations exist. In a universe that big and old, civilizations may have started millions of years apart from each other and developed in different directions and at different speeds, so they may range from cavemen to the super advanced. We know that humans started out with nothing and then progressed to making tools, building houses, and so on. We also know that humans are curious, competitive, greedy for resources, and expansionist. The more of these qualities our ancestors had, the more successful they were in the civilization-building process.

Other alien civilizations, if they exist, are likely to have evolved in a similar way. Human progress can be measured quite precisely by how much energy we extract from our environment: as our energy consumption grew exponentially, so did the abilities of our civilization. Between 1800 and 2015, the population increased sevenfold, while humanity was consuming 25 times more energy. It is likely that this process will continue into the far future. Based on these facts, the scientist Nikolai Kardashev developed a method for categorizing civilizations, from cave dwellers to gods ruling over galaxies, into a scale called the Kardashev scale. It is a method of ranking civilizations by their energy use, and it puts them into four categories. A type 1 civilization is able to use the available energy of its home planet. A type 2 civilization is able to use the available energy of its star and planetary system. A type 3 civilization is able to use the available energy of its galaxy. A type 4 civilization is able to use the available energy of multiple galaxies.

The jump between categories is like comparing an ant colony to a human metropolitan area: to ants we are so complex and powerful, we might as well be gods. On the lower end of the scale are type 0 to type 1 civilizations, anything from hunter-gatherers to something we could achieve in the next few hundred years. These might actually be abundant in the Milky Way. If that is possible, why are they not sending any radio signals into space? Even if they transmitted radio signals as we do, it might not be very helpful. Our own signals may by now extend over 200 light years, but this is only a tiny fraction of the Milky Way, and even if someone were listening, after a few light years our signals decay into noise that is impossible to identify as coming from an intelligent species. Today humanity ranks at about level 0.75. We have built huge structures and changed the composition and temperature of the atmosphere. If progress continues, we will become a full type 1 civilization in the next few hundred years. The next step toward type 2 is to try to mine other planets and bodies.
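How does a figure like "level 0.75" get computed? One common approach is Carl Sagan’s continuous interpolation of the Kardashev scale, which converts a civilization’s total power use P, in watts, into a fractional rank. A minimal Python sketch (the power figures are rough, illustrative estimates):

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, so type 1 ~ 1e16 W,
    type 2 ~ 1e26 W, and type 3 ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power use is roughly 2e13 W (an assumed,
# order-of-magnitude figure), which lands near level 0.73.
print(kardashev_level(2e13))  # ~0.73
print(kardashev_level(1e16))  # 1.0 -> a full type 1 civilization
```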

As a civilization expands and uses more and more stuff and space, at some point it may start its largest project yet: extracting the energy of its star by building a Dyson swarm. Once that is finished, energy becomes practically unlimited, and the next frontier moves to other stars light years away. As a species gets closer to type 3, it might discover new physics, come to understand and control dark matter and dark energy, or become able to travel faster than light. For such a species, humans are the ants trying to understand the galactic metropolitan area. A high type 2 civilization might already consider humanity too primitive; a type 3 civilization might consider us bacteria. But the scale does not end here: some scientists suggest there might be type 4 and type 5 civilizations, whose influence stretches over galaxy clusters or superclusters. This scale is just a thought experiment, but it still raises interesting possibilities. Who knows, there might be a type omega civilization, able to manipulate the entire universe, and its members might even be the actual creators of our universe.

James Webb Space Telescope – Working and Applications

The James Webb Space Telescope, or JWST, will replace the Hubble Space Telescope and will help us see the universe as it was shortly after the big bang. It was named after NASA’s second administrator, James Webb, who headed the agency from 1961 to 1968. The new telescope was first planned for launch into orbit in 2007 but has since been delayed more than once; it is now scheduled for 18 December 2021. After 2030 the Hubble will go into a well-deserved rest: since its launch in 1990 it has provided more than a million images of thousands of stars, nebulae, planets, and galaxies. Hubble has captured images of stars as they were about 380 million years after the big bang, which supposedly happened 13.7 billion years ago; these objects may no longer exist, yet we still see their light. We now expect the James Webb to show us the universe as it was only 100 to 250 million years after its birth, which could transform our current understanding of the structure of the universe. The Spitzer and Hubble space telescopes have collected data on the gas shells of about a hundred planets; according to experts, the James Webb is capable of exploring the atmospheres of more than 300 different exoplanets.

The working of the James Webb Space Telescope

The James Webb is an orbiting infrared observatory that will investigate the thermal radiation of space objects. When heated to a given temperature, all solids and liquids emit energy in the infrared spectrum, and there is a definite relationship between wavelength and temperature: the higher the temperature, the shorter the peak wavelength and the higher the radiation intensity. The James Webb’s sensitive equipment will be able to study cold exoplanets with surface temperatures of up to 27° Celsius. An important quality of this new telescope is that it will revolve around the sun and not the earth, unlike Hubble, which is located at an altitude of about 570 kilometers in low earth orbit. With the James Webb orbiting the sun, it will be impossible for the earth to interfere with it; however, the James Webb will move in sync with the earth to maintain strong communication, and the distance from the James Webb to the earth will range between about 374,000 and 1.5 million kilometers in the direction opposite the sun. Its design must therefore be extremely reliable.
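The temperature-to-wavelength relationship described above is Wien’s displacement law, λ_peak = b / T with b ≈ 2.898 × 10⁻³ m·K. A small Python sketch (the temperatures are illustrative assumptions):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_microns(temp_kelvin: float) -> float:
    """Wien's law: hotter bodies radiate at shorter peak wavelengths."""
    return WIEN_B / temp_kelvin * 1e6  # metres -> microns

# A cold exoplanet near 27 degrees Celsius (300 K) radiates with its peak
# deep in the infrared, the band the James Webb is built to observe.
print(peak_wavelength_microns(300))   # ~9.7 microns (mid-infrared)
print(peak_wavelength_microns(5800))  # ~0.5 microns (visible, sun-like star)
```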

The James Webb telescope weighs 6.2 tonnes. The main mirror of the telescope has a diameter of 6.5 meters and a collecting area of 25 square meters; it resembles a giant honeycomb consisting of 18 sections. Due to its impressive size, the main mirror has to be folded for launch. This giant mirror will capture light from the most distant galaxies, creating a clear picture and eliminating distortion. A special type of beryllium was used in the mirror, which retains its shape at low cryogenic temperatures. The front of the mirror is covered with a layer of 48.25 grams of gold, 100 nanometers thick; such a coating best reflects infrared radiation. A small secondary mirror sits opposite the main mirror; it receives light from the main mirror and directs it to instruments at the rear of the telescope. The sunshield has a length of 20 meters and a width of 7 meters. It is composed of very thin layers of Kapton polyimide film, which protect the mirror and tools from sunlight and cool the telescope’s ultra-sensitive sensor arrays to −220° Celsius.

The NIRCam (Near-Infrared Camera) is the main set of eyes of the telescope; with the NIRCam we expect to be able to view the oldest stars in the universe and the planets around them. NIRSpec, the Near-Infrared Spectrograph, will collect information on both the physical and chemical properties of an object. MIRI, the Mid-Infrared Instrument, will allow us to see stars being born and many unknown objects of the Kuiper belt. The Near-Infrared Imager and Slitless Spectrograph, or NIRISS, is aimed at finding exoplanets and the first light of distant objects. Finally, the FGS (Fine Guidance Sensor) helps accurately point the telescope for higher-quality images; it updates the telescope’s position in space sixteen times per second and controls the operation of the steering and main mirrors. The plan is to launch the telescope with the help of the European launch vehicle Ariane 5 from the Kourou spaceport at the Guiana Space Centre in French Guiana. The device is designed for 5 to 10 years of operation, but it may serve longer. If everything goes well, $10 billion worth of construction and preparation will finally be put to work in orbit.


Medical breakthroughs – Laparoscopy


Treating illness by using tools to remove or manipulate parts of the human body is an old idea. Even minor operations once carried high risks, but that does not mean all early surgery failed. Indian doctors, centuries before the birth of Christ, successfully removed tumors and performed amputations and other operations. They developed dozens of metal tools, relied on alcohol to dull the patient’s senses, and controlled bleeding with hot oil and tar. The 20th century brought even more radical change through technology: advances in fiber-optic technology and the miniaturization of video equipment have revolutionized surgery. The laparoscope is the James Bond-like gadget of the surgeon’s repertoire of instruments. Only a small incision is made through the patient’s abdominal wall, into which the surgeon puffs carbon dioxide to open up the passage.

Using a laparoscope for visual assessment, diagnosis, and even surgery causes less physiological damage, reduces patients’ pain, and speeds their recovery, leading to shorter hospital stays. In the early 1900s, Germany’s Georg Kelling developed a surgical technique in which he injected air into the abdominal cavity and inserted a cystoscope, a tube-like viewing scope, to assess the patient’s innards. In late 1901, he began experimenting and successfully peered into a dog’s abdominal cavity using the technique. Without cameras, laparoscopy’s use was limited to diagnostic procedures carried out by gynecologists and gastroenterologists. By the 1980s, improvements in miniature video devices and fiber optics inspired surgeons to embrace minimally invasive surgery. In 1996, the first live broadcast of a laparoscopy took place, and a year later Dr. J. Himpens used a computer-controlled robotic system to aid in laparoscopy. This type of surgery is now used for gallbladder removal as well as for the diagnosis and surgical treatment of fertility disorders, cancer, and hernias.

Hypothermia, a drop in body temperature significantly below normal, can be life-threatening, as in the case of overexposure to severe wintry conditions. But in some cases, like that of Kevin Everett of the Buffalo Bills, hypothermia can be a lifesaver. Everett fell to the ground with a potentially crippling spinal cord injury during a 2007 football game. Doctors treating him on the field immediately injected his body with a cooling fluid, and at the hospital they inserted a cooling catheter to lower his body temperature by roughly five degrees while proceeding with surgery to fix his fractured spine. Despite fears that he would be paralyzed, Everett regained his ability to walk, and advocates of therapeutic hypothermia feel his lowered body temperature may have made the difference. Therapeutic hypothermia is still a controversial procedure: the side effects of excessive cooling include heart problems, blood clotting, and increased infection risk. On the other hand, supporters claim, it slows down cell damage, swelling, and other destructive processes well enough that it can mean successful surgery after a catastrophic injury. Surgical lasers can generate heat of up to 10,000°F on a pinhead-sized spot, sealing blood vessels and sterilizing tissue. Surgical robots and virtual computer technology are also changing medical practice: robotic surgical tools increase precision, and in 1998 heart surgeons at Paris’s Broussais Hospital performed the first robotic surgery. The new technology allows enhanced views and precise control of instruments.

“After a complex laparoscopic operation, the 65-year-old patient was home in time for dinner.” – Elisa Birnbaum, surgeon


History of Steam Engines – Thomas Savery

Thomas Newcomen, a Devonshire blacksmith, developed the first successful steam engine in the world and used it to pump water from mines. His engine was a development of the thermic siphon built by Thomas Savery, whose surface condensation patents blocked his own designs. Newcomen’s engine allowed steam to condense inside a water-cooled cylinder, the vacuum produced by this condensation being used to draw down a tightly fitting piston that was connected by chains to one end of a huge, wooden, centrally pivoted beam. The other end of the beam was attached by chains to a pump at the bottom of the mine. The whole system was run safely at near atmospheric pressure, the weight of the atmosphere being used to depress the piston into the evacuated cylinder.

Newcomen’s first atmospheric steam engine worked at Conygree in the West Midlands of England. Many more were built in the next seventy years, the initial brass cylinders being replaced by larger cast-iron ones, some up to 6 feet (1.8 m) in diameter. The engine was relatively inefficient, and in areas where coal was not plentiful it was eventually replaced by double-acting engines designed by James Watt. These used both sides of the cylinder for power strokes and usually had separate condensers. James Watt was responsible for some of the most important advances in steam engine technology.

In 1765 Watt made the first working model of his most important contribution to the development of steam power, and he patented it in 1769. His innovation was an engine in which steam condensed outside the main cylinder in a separate condenser, so the cylinder remained at working temperature at all times. Watt made several other technological improvements to increase the power and efficiency of his engines. For example, he realized that, within a closed cylinder, low-pressure steam could push the piston instead of atmospheric air. It took only a short mental leap for Watt to design a double-acting engine in which steam pushed the piston first one way, then the other, increasing efficiency still further.

Watt’s influence in the history of steam engine technology owes as much to his business partner, Matthew Boulton, as it does to his own ingenuity. The two men formed a partnership in 1775, and Boulton poured huge amounts of money into Watt’s innovations. From 1781, Boulton and Watt began making and selling steam engines that produced rotary motion; all the previous engines had been restricted to a vertical, pumping action. Rotary steam engines were soon the most common source of power for factories, becoming a major driving force behind Britain’s industrial revolution.

By the age of nineteen, the Cornishman Richard Trevithick was working for the Cornish mining industry as a consultant engineer. The mine owners were attempting to skirt around the patents owned by James Watt. William Murdoch had developed a model steam carriage, starting in 1784, and demonstrated it to Trevithick in 1794. Trevithick also knew that recent improvements in the manufacturing of boilers meant that they could now cope with much higher steam pressure than before. By using high-pressure steam in his experimental engines, Trevithick was able to make them smaller, lighter, and more manageable.

Trevithick constructed high-pressure working models of both stationary and locomotive engines that were so successful that in 1799 he built a full-scale, high-pressure engine for hoisting ore. The used steam was vented out through a chimney into the atmosphere, bypassing Watt’s patents. Later, he built a full-size locomotive that he called the Puffing Devil. On December 24, 1801, this bizarre-looking machine successfully carried several passengers on a journey up Camborne Hill in Cornwall. Despite objections from Watt and others about the dangers of high-pressure steam, Trevithick’s work ushered in a new era of mechanical power and transport.

How do we measure distances in space? Light years

In the 1800s, scientists discovered the realm of light beyond what is visible, and the 20th century saw dramatic improvements in observation technologies. Now we are probing distant planets, stars, galaxies, and black holes that even light takes years to reach. So how do we do that? Light is the fastest thing we know in the universe. It is so fast that we measure enormous distances by how long it takes light to travel them. In one year, light travels about 6 trillion miles; this is the distance we call one light year. Apollo 11 had to travel four days to reach the moon, yet the moon is only about one light second from earth. Meanwhile, the nearest star beyond our own sun, Proxima Centauri, is 4.24 light years away. Our Milky Way galaxy is on the order of 100,000 light years across, and the nearest large galaxy to our own, Andromeda, is about 2.5 million light years away.
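To make these numbers concrete, here is a small Python sketch that derives them from the speed of light (all figures rounded):

```python
LIGHT_SPEED_MILES_S = 186_282         # speed of light in miles per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

light_year_miles = LIGHT_SPEED_MILES_S * SECONDS_PER_YEAR
print(f"{light_year_miles:.2e} miles")           # ~5.88e12: about 6 trillion

# Light-travel time to the moon (~239,000 miles away): just over a second.
print(239_000 / LIGHT_SPEED_MILES_S, "seconds")  # ~1.28

# Proxima Centauri at 4.24 light years, expressed in miles:
print(f"{4.24 * light_year_miles:.2e} miles")    # ~2.5e13
```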

The question is: how do we know the distance to these stars and galaxies? For objects that are very close by, we can use a concept called trigonometric parallax. Hold out your thumb, close your left eye, and then open your left eye and close your right eye. Your thumb will appear to move, while more distant objects remain in place. The same concept applies to measuring distant stars, but they are much farther away than the length of your arm, and the earth is not large enough as a baseline: even with telescopes on opposite sides of the equator, you would not see much of a shift in position. So we look at the change in a star’s apparent location over six months. When we measure the relative positions of the stars in summer, and then again in winter, nearby stars seem to have moved against the background of the more distant stars and galaxies.
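In practice, the measured parallax angle p (in arcseconds, using the earth’s orbit as the baseline) gives the distance directly as d = 1/p parsecs, where one parsec is about 3.26 light years. A minimal Python sketch:

```python
LY_PER_PARSEC = 3.26156  # one parsec expressed in light years

def parallax_distance_ly(parallax_arcsec: float) -> float:
    """Trigonometric parallax: distance in parsecs is 1 / p(arcseconds)."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# Proxima Centauri's measured parallax is about 0.768 arcseconds:
print(parallax_distance_ly(0.768))  # ~4.25 light years, matching the text
```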

But this method only works for objects less than a few thousand light years away. For greater distances, we use a different method based on indicators called standard candles. Standard candles are objects whose intrinsic brightness, or luminosity, we know well. For example, if you know how bright your light bulb is, then even when you move away from it you can find the distance by comparing the amount of light you receive to its intrinsic brightness. In astronomy, one such standard candle is a special type of star called a Cepheid variable. These stars constantly contract and expand, and because of this their brightness varies. We can calculate the luminosity by measuring the period of this cycle, with more luminous stars changing more slowly. By comparing the light that we receive to the intrinsic brightness, we can calculate the distance.
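The comparison rests on the inverse-square law: the flux F we receive from a source of luminosity L at distance d is F = L / (4πd²), so d = √(L / 4πF). A short Python sketch, with made-up numbers for illustration:

```python
import math

METRES_PER_LY = 9.461e15  # one light year in metres

def standard_candle_distance(luminosity_w: float, flux_w_m2: float) -> float:
    """Invert the inverse-square law F = L / (4*pi*d^2) for distance d."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

# Hypothetical Cepheid: its pulsation period implies L ~ 1e30 W, and we
# measure a flux of 1e-12 W/m^2 at the telescope (both assumed values).
d_metres = standard_candle_distance(1e30, 1e-12)
print(d_metres / METRES_PER_LY, "light years")  # ~30,000 ly
```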

But we can only observe individual stars up to about 40 million light years away, so beyond that we have to use another type of standard candle: the type 1a supernova. Supernovae are giant stellar explosions, one of the ways that stars die. These explosions are so bright that they outshine the galaxies where they occur, and type 1a supernovae can serve as standard candles because intrinsically bright ones fade more slowly than fainter ones. With an understanding of brightness and decline rate, we can use these supernovae to probe distances up to several billions of light years away. But what is the importance of seeing distant objects? Well, the light emitted by the sun takes eight minutes to reach us, which means that the light we see now is a picture of the sun eight minutes ago. Likewise, for galaxies millions of light years away, it has taken millions of years for their light to reach us. So the universe has a kind of inbuilt time machine: the further back we look, the younger the universe we are probing. Astrophysicists use this to read the history of the universe, and to understand how and where we come from.

“Dream in light years, challenge miles, walk step by step.” – William Shakespeare

Why Do Waves Occur? Waves and Tides

Why do waves form?

A wave begins as the wind ruffles the surface of the ocean. When the ocean is calm and glasslike, even the mildest breeze forms ripples, the smallest type of wave. Ripples provide surfaces for the wind to act on, which produces larger waves, and stronger winds push the nascent waves into steeper and higher hills of water. The size a wave reaches depends on the speed and strength of the wind, the length of time the wind blows, and the distance over which it blows; in the open ocean, this distance is known as the fetch. A long fetch accompanied by strong and steady winds can produce enormous waves. The highest point of a wave is called the crest and the lowest point the trough; the distance from one crest to the next is known as the wavelength.

On November 11, 2011, US surfer Garrett McNamara surfed a massive 78-foot (23.8-meter) wave at Nazaré, Portugal.

Although water appears to move forward with the waves, for the most part water particles travel in circles within the waves. The visible movement is the wave’s form and energy moving through the water, courtesy of energy provided by the wind. Wave speed varies; on average, waves travel at about 20 to 50 mph. Ocean waves also vary greatly in height from crest to trough, averaging 5 to 10 feet, while storm waves may tower 50 to 70 feet or more. The biggest wave ever recorded by humans occurred in Lituya Bay, on the southeast side of Alaska, on July 9, 1958, when a massive earthquake triggered a megatsunami, the tallest tsunami in modern times. As a wave enters shallow water and nears the shore, its up-and-down movement is disrupted and it slows down. The crest grows higher and begins to surge ahead of the rest of the wave, eventually toppling over and breaking apart. The energy released by a breaking wave can be explosive: breakers can wear down rocky coasts and also build up sandy beaches.

Why does a tide occur?

Tides are the regular daily rise and fall of ocean waters. Twice each day in most locations, water rises up over the shore until it reaches its highest level, or high tide. In between, the water recedes from the shore until it reaches its lowest level, or low tide. Tides respond to the gravitational pull of the moon and sun. Gravitational pull has little effect on the solid and inflexible land, but the fluid oceans react strongly. Because the moon is closer, its pull is greater, making it the dominant force in tide formation.

Gravitational pull is greatest on the side of earth facing the moon and weakest on the side opposite the moon. The difference in these forces, in combination with earth’s rotation and other factors, allows the oceans to bulge outward on both sides, creating high tides; the sides of earth that are not in alignment with the moon experience low tides at that time. Tides follow different patterns depending on the shape of the seacoast and the ocean floor. In Nova Scotia, water at high tide can rise more than 50 feet higher than the low tide level. Tides tend to roll in gently on wide, open beaches, while in confined spaces, such as a narrow inlet or bay, the water may rise to very high levels at high tide.
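Why does the closer moon dominate even though the sun’s overall gravitational pull on earth is far stronger? Because tides depend on the difference in pull across the earth, which falls off with the cube of distance (tidal acceleration ∝ M/d³). A rough Python comparison:

```python
# Tidal acceleration scales as M / d^3 (the gradient of gravity
# across the earth), not as the raw pull M / d^2.
MOON_MASS, MOON_DIST = 7.35e22, 3.84e8    # kg, metres
SUN_MASS, SUN_DIST = 1.99e30, 1.496e11    # kg, metres

moon_tide = MOON_MASS / MOON_DIST**3
sun_tide = SUN_MASS / SUN_DIST**3

print(moon_tide / sun_tide)  # ~2.2: the moon's tidal effect is roughly
                             # double the sun's, despite its tiny mass
```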

There are typically two spring tides and two neap tides each month. During a spring tide, the range is greater than the mean range: the water level rises and falls to the greatest extent from the mean tide level. Spring tides occur about every two weeks, when the moon is full or new, and tides are at their maximum when the moon and the sun are aligned with the earth. In a semidiurnal cycle, the high and low tides occur around 6 hours and 12.5 minutes apart. The same tidal forces that cause tides in the oceans also affect the solid earth, causing it to change shape by a few inches.
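The 6-hour-12.5-minute figure follows from the lunar day: the moon passes overhead every 24 hours 50 minutes, and a semidiurnal cycle fits two high tides and two low tides into that span. A quick check in Python:

```python
LUNAR_DAY_MIN = 24 * 60 + 50  # ~1490 minutes between lunar transits

# Two highs and two lows per lunar day means successive high and low
# tides are a quarter of a lunar day apart.
interval_min = LUNAR_DAY_MIN / 4
print(divmod(interval_min, 60))  # (6.0, 12.5) -> 6 h 12.5 min
```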


Delhi-LSA to break myths regarding health effects of EMF exposure from mobile towers

The Department of Telecommunications (DoT), Delhi Licensed Service Area (LSA), organized an awareness webinar on “EMF Emissions and Telecom Towers” here yesterday. The session was organized as part of DoT’s public advocacy programme to make consumers aware of the growing need for mobile towers to build reliable telecom infrastructure and to break myths regarding the health effects of EMF exposure from mobile towers.

The webinar was addressed by Sh. Nizamul Haq, Advisor, DoT, New Delhi, and Sh. Arun Kumar, DDG, DoT, Delhi LSA. A presentation on various aspects of EMF and the steps taken by DoT was delivered by Sh. Vijay Prakash, Director, and Sh. Kamal Deo Tripathi, ADG, DoT, Delhi LSA. Various health-related queries and myths about the harmful effects of EMF radiation from mobile towers were also clarified by a medical expert, Dr Vivek Tandon, Associate Professor (Neurosurgery), All India Institute of Medical Sciences, New Delhi.

Shri Nizamul Haq, Advisor, DoT, Delhi LSA, put a spotlight on the importance of telecommunications as an effective tool for the socio-economic development of a nation. Telecommunications has become core infrastructure for the rapid growth and modernization of various sectors of the economy, and to provide the best quality of telecommunication service to customers, the expansion of the mobile network, including tower infrastructure, is inevitable.

Shri Arun Kumar, DDG, and Shri Vijay Prakash, Director, Delhi LSA, further explained that there is no convincing scientific evidence that EMF emissions from mobile towers below the safe limits prescribed by the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and recommended by the World Health Organisation (WHO) cause adverse health effects. Various judgements of the High Courts of India on the issue of radiation from mobile towers state that there is no conclusive data to show that radiation from mobile towers is in any way harmful or hazardous to the health of citizens.

Dr. Vivek Tandon, Associate Professor, AIIMS Delhi, clarified various myths about health issues attributed to EMF radiation from mobile towers and handsets. Misconceptions among a section of the population about the health hazards of EMF radiation should not override the factual information made available to us through scientific research. Dr Tandon explained various aspects of the health-related issues in a simplified way and touched upon several studies and their implications for real-life situations.

The Department of Telecommunications (DoT), through its field units, has already taken the necessary steps and adopted strict norms for safety from the EMF radiation emitted by mobile towers. DoT has adopted radiation norms that are 10 times stricter than those prescribed by ICNIRP and recommended by WHO. All the information on mobile tower radiation is available to the public on DoT’s website: https://dot.gov.in/journey-emf

To date, 46,000 mobile base transceiver stations (BTS) have been tested in Delhi LSA, and all sites have been found EMF-compliant as per DoT norms.

For tower EMF emission details, visit http://tarangsanchar.gov.in/EMFportal.

Reality Show: Shark Tank India

Shark Tank is an American business reality series that premiered on August 9, 2009. In the show, entrepreneurs make business presentations to a panel of investors known as “Sharks”, who decide whether or not to invest in their companies. The Sharks often find weaknesses and faults in the entrepreneurs’ valuations of their companies, products, and so on. The Sharks are paid as cast stars of the show, but the money they invest is their own. An entrepreneur can make a handshake deal on the show if a panel member is interested; if all panel members opt out, the entrepreneur leaves empty-handed. The show now runs worldwide in different countries, for example Shark Tank India, Shark Tank Australia, Shark Tank Mexico, Shark Tank Colombia, and Shark Tank Nepal.

Shark Tank India is an Indian Hindi-language business reality show that airs on Sony Entertainment Television. The first season premiered on 20 December 2021, concluded on 4 February 2022, and contained 35 episodes. The show was hosted by Rannvijay Singha. The first season received applications from 62,000 aspirants across India, out of which 198 businesses were selected to pitch their ideas to the Sharks.

All “Sharks” in Shark Tank India

Ashneer Grover, Managing Director and Co-founder of “BharatPe”; Aman Gupta, Co-founder and Chief Marketing Officer of “boAt”; Anupam Mittal, Founder and CEO of “Shaadi.com”; Ghazal Alagh, Co-founder and Chief Mama of “MamaEarth”; Namita Thapar, Executive Director of “Emcure Pharmaceuticals”; Peyush Bansal, Co-founder and CEO of “Lenskart”; and Vineeta Singh, CEO and Co-founder of “SUGAR Cosmetics” are the Sharks in Shark Tank India.

The biggest deal so far was made by Peyush Bansal with Sid07 Designs: Rs. 25 lakhs for 75% equity, plus Rs. 22 lakhs as debt.
