Emergence of steam engines – The Industrial Revolution

Thomas Newcomen, a Devonshire blacksmith, developed the first successful steam engine in the world and used it to pump water from mines. His engine was a development of the thermic siphon built by Thomas Savery, whose surface condensation patents blocked Newcomen’s own designs. Newcomen’s engine allowed steam to condense inside a water-cooled cylinder, the vacuum produced by this condensation being used to draw down a tightly fitting piston that was connected by chains to one end of a huge, wooden, centrally pivoted beam. The other end of the beam was attached by chains to a pump at the bottom of the mine. The whole system ran safely at near atmospheric pressure, the weight of the atmosphere being used to depress the piston into the evacuated cylinder.

Newcomen’s first atmospheric steam engine worked at Conygree in the West Midlands of England. Many more were built over the next seventy years, the initial brass cylinders being replaced by larger cast-iron ones, some up to 6 feet (1.8 m) in diameter. The engine was relatively inefficient, and in areas where coal was not plentiful it was eventually replaced by double-acting engines designed by James Watt. These used both sides of the cylinder for power strokes and usually had separate condensers. James Watt was responsible for some of the most important advances in steam engine technology.

James Watt’s steam engine, 1764

In 1765 Watt made the first working model of his most important contribution to the development of steam power, and he patented it in 1769. His innovation was an engine in which steam condensed outside the main cylinder in a separate condenser, so the cylinder remained at working temperature at all times. Watt made several other technological improvements to increase the power and efficiency of his engines. For example, he realized that, within a closed cylinder, low-pressure steam could push the piston instead of atmospheric air. It took only a short mental leap for Watt to design a double-acting engine in which steam pushed the piston first one way, then the other, increasing efficiency still further.

Watt’s influence in the history of steam engine technology owes as much to his business partner, Matthew Boulton, as it does to his own ingenuity. The two men formed a partnership in 1775, and Boulton poured huge amounts of money into Watt’s innovations. From 1781, Boulton and Watt began making and selling steam engines that produced rotary motion. All the previous engines had been restricted to a vertical, pumping action. Rotary steam engines were soon the most common source of power for factories, becoming a major driving force behind Britain’s Industrial Revolution.

The world’s first locomotive by Richard Trevithick, 1804

By the age of nineteen, the Cornishman Richard Trevithick was working for the Cornish mining industry as a consultant engineer. The mine owners were attempting to skirt around the patents owned by James Watt. William Murdoch had developed a model steam carriage, starting in 1784, and demonstrated it to Trevithick in 1794. Trevithick thus knew that recent improvements in the manufacture of boilers meant that they could now cope with much higher steam pressure than before. By using high-pressure steam in his experimental engines, Trevithick was able to make them smaller, lighter, and more manageable.

Trevithick constructed high-pressure working models of both stationary and locomotive engines that were so successful that in 1799 he built a full-scale, high-pressure engine for hoisting ore. The used steam was vented out through a chimney into the atmosphere, bypassing Watt’s patents. Later, he built a full-size locomotive that he called the Puffing Devil. On December 24, 1801, this bizarre-looking machine successfully carried several passengers on a journey up Camborne Hill in Cornwall. Despite objections from Watt and others about the dangers of high-pressure steam, Trevithick’s work ushered in a new era of mechanical power and transport.

Are electric cars the future? The success story of Tesla

In 1834, Robert Anderson of Scotland created the first electric carriage. The following year, a small electric car was built by Professor Stratingh of Groningen, Holland, and his assistant, Christopher Becker. More practical electric vehicles were brought onto the road by both the American Thomas Davenport and the Scotsman Robert Davidson in 1842. Both of these inventors used non-rechargeable electric cells in their vehicles. The Parisian engineer Charles Jeantaud fitted a carriage with an electric motor in 1881. William Edward Ayrton and John Perry, professors at London’s City and Guilds Institute, began road trials with an electric tricycle in 1882; three years later a battery-driven electric cab serviced Brighton.

Electric cars during the 1890s in the United States

Around 1900, internal combustion engines were only one of three competing technologies for propelling cars. Steam engines were also in use, while electric vehicles were clean, quiet, and did not smell. In the United States, electric cabs dominated in major cities for several years. The electric vehicle did not fail because of the limited range of batteries or their weight. Historian Michael Schiffer and others maintain, rather, that failed business strategies were more important. Thus, most motor cars in the twentieth century relied on internal combustion, except for niche applications such as urban deliveries. At the end of the century, after several efforts from small manufacturers, General Motors made available an all-electric vehicle, the EV1, from 1996 to 2003. In the late 1990s, Toyota and Honda introduced hybrid vehicles combining internal combustion engines and batteries.

How was Tesla created?

Entrepreneur Elon Musk is the man behind many modern innovations, including the digital payment service PayPal, the independent space travel company SpaceX, and the electric car company Tesla Motors. Tesla Motors is named after Nikola Tesla, a Serbian American inventor who contributed to the development of alternating current electricity. In 2003 two Silicon Valley engineers, Martin Eberhard and Marc Tarpenning, sold their eBook business for 187 million dollars and started Tesla to build a greener car. Elon Musk joined as an early investor, leading the Series A financing and taking on several other roles as well. Tesla’s plan was simple but potentially genius. They focused on lithium-ion batteries, which they expected to get cheaper and more powerful for many years. They planned to start their journey with a high-margin, high-performance sports car. Tesla also planned to integrate energy generation and storage in the home and develop other emerging technologies like autonomous vehicles.

Tesla Gigafactory in Tilburg, Netherlands

With this plan set, the company was ready to build a high-performance, low-volume sports car, the Roadster. Finally, in 2008 Tesla Motors released its first car, the completely electric Roadster. That same year, Martin and Marc left the company, and eventually Elon Musk took over as CEO. He made drastic changes, raising $40 million in debt financing and borrowing $465 million from the US government. In 2012 Tesla started focusing on two new cars, the Model S and the Model X. Beginning in 2012, Tesla also built stations called Superchargers in the United States and Europe, designed for charging batteries quickly and at no extra cost to Tesla owners. These two models were poised for success, but the high cost of lithium-ion batteries made them luxury items. To compensate for this, in 2013 Tesla began building large factories called Gigafactories to produce lithium-ion batteries and cars on a large scale, with the aim of ultimately making Tesla cars cheaper than gas-powered vehicles. Tesla then added an Autopilot system to its Model S, giving it semi-autonomous capabilities. By the end of 2017 Tesla had passed Ford in market value. Tesla released another crossover, the Model Y, in 2020; it was smaller and less expensive than the Model X and shared many parts with the Model 3. Tesla has announced several models to be released in the future, including a second version of the Roadster, the Semi trailer truck, and a pickup truck, the Cybertruck.

The journey of rocket science from 900 C.E. till now

The history of rocketry dates back to around 900 C.E., but the use of rockets as highly destructive missiles able to carry large payloads of explosives was not feasible until the late 1930s. War has been the catalyst for many inventions, both benevolent and destructive. The ballistic missile is intriguing because it can be both of these things. It has made possible some of the greatest deeds mankind has ever achieved, and also some of the worst. The German Walter Dornberger and his team began developing rockets in 1938, but it was not until 1944 that the first ballistic missile, the Aggregate 4 or V-2 rocket, was ready for use. The V-2 was used extensively by the Nazis at the end of World War II, primarily as a terror weapon against civilian targets. They were powerful and imposing: 46 feet (14 m) long, able to reach speeds of around 3,500 miles per hour (5,600 kph) and deliver a warhead of around 2,200 pounds (1,000 kg) at a range of 200 miles (320 km).

The German V-1 of World War II was the world’s first guided missile.

Ballistic missiles follow a ballistic flight path, determined by the brief initial powered phase of the missile’s flight. This is unlike guided missiles, such as cruise missiles, which are essentially unmanned airplanes packed with explosives. It meant that the early V-2s flew inaccurately, so they were of most use in attacking large, city-sized targets such as London, Paris, and Antwerp. The Nazi ballistic missile program has had both a great and a terrible legacy. Ballistic missiles such as the V-2 were scaled up to produce intercontinental ballistic missiles with a variety of warheads, but they also became the basis of the craft that have carried people into space. Ballistic missiles may have led us to the point of self-destruction, but they have also allowed us to venture beyond our atmosphere.

Intercontinental ballistic missiles (ICBMs)

The first intercontinental ballistic missiles (ICBMs) entered service in 1959. An ICBM is a guided ballistic missile with a minimum range of 5,500 kilometres, designed primarily to deliver nuclear weapons. Russia, the United States, China, France, India, the United Kingdom, and North Korea are the only countries with operational ICBMs. An ICBM typically has a three-stage booster; during the boost phase the rocket gets the missile airborne, and this phase lasts around 2 to 5 minutes, until the ICBM has reached space. ICBMs have up to three rocket stages, with each one ejected or discarded after it burns out.

The DF-41, developed in China, is currently the most powerful intercontinental ballistic missile (ICBM)

ICBMs use either liquid or solid propellant. Liquid-fuel rockets tend to burn longer in the boost phase than solid-propellant ones. The second phase of an ICBM’s flight begins once the rocket has reached space, where it continues along its ballistic trajectory. At this point the rocket will be travelling at anywhere from 24,140 to 27,360 kilometres an hour. The final phase is the ICBM’s final separation and re-entry into Earth’s atmosphere. The nose cone section carrying the warhead separates from the final rocket booster and drops back to Earth. If the ICBM has rocket thrusters, they are used at this point to orient it towards the target. It is important that ICBMs have adequate heat shields to survive re-entry; otherwise they burn up and fall apart. It is also important to note that although several countries have ICBMs, none has ever been fired in anger against another country.

“This third day of October, 1942, is the first of a new era in transportation, that of space travel.” – Walter Dornberger

The James Webb Space Telescope – World’s most powerful telescope

The James Webb Space Telescope, or JWST, will replace the Hubble Space Telescope. It will help us to see the universe as it was shortly after the Big Bang. It was named after the second head of NASA, James Webb, who led the agency from 1961 to 1968. The new telescope was first planned for launch into orbit in 2007 but has since been delayed more than once; it is now scheduled for 18 December 2021. After 2030 the Hubble will go on a well-deserved rest: since its launch in 1990 it has provided more than a million images of thousands of stars, nebulae, planets, and galaxies. The Hubble has captured images of stars as they were about 380 million years after the Big Bang, which is thought to have happened 13.7 billion years ago. These objects may no longer exist, yet we still see their light. Now we expect the James Webb to show us the universe as it was only 100 to 250 million years after its birth. It could transform our current understanding of the structure of the universe. The Spitzer and Hubble space telescopes have collected data on the gaseous envelopes of about a hundred planets. According to experts, the James Webb is capable of exploring the atmospheres of more than 300 different exoplanets.

The main mirror – a giant honeycomb consisting of 18 sections

The working of the James Webb Space Telescope

The James Webb is an orbiting infrared observatory that will investigate the thermal radiation of space objects. When heated to a certain temperature, all solids and liquids emit energy in the infrared spectrum, and there is a relationship between wavelength and temperature: the higher the temperature, the shorter the wavelength and the higher the radiation intensity. The James Webb’s sensitive equipment will be able to study cold exoplanets with surface temperatures of up to 27° Celsius. An important quality of this new telescope is that it will revolve around the Sun and not the Earth, unlike Hubble, which is located at an altitude of about 570 kilometers in low Earth orbit. With the James Webb orbiting the Sun, it will be impossible for the Earth to interfere with it. The James Webb will move in sync with the Earth to maintain strong communication, yet the distance from the James Webb to the Earth will be between about 374,000 and 1.5 million kilometers, in the direction opposite the Sun. So its design must be extremely reliable.
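The wavelength-temperature relationship described above is captured by Wien’s displacement law, which says the peak emission wavelength is inversely proportional to temperature. The short Python sketch below is an added illustration (not part of the original text or of any telescope software) showing why a roughly room-temperature body radiates deep in the infrared that the James Webb is built to detect, while the Sun peaks in visible light.

```python
# Wien's displacement law: peak wavelength = b / T, with b ~ 2.898e-3 m*K.
WIEN_CONSTANT = 2.898e-3  # metre-kelvins

def peak_wavelength_micrometres(temperature_kelvin: float) -> float:
    """Wavelength of peak thermal emission, in micrometres."""
    return WIEN_CONSTANT / temperature_kelvin * 1e6

# An exoplanet surface near 27 degrees Celsius (~300 K) peaks deep in the infrared,
# while the Sun's surface (~5,800 K) peaks in visible light.
for temp_k in (300, 5800):
    print(f"{temp_k} K -> peak emission near {peak_wavelength_micrometres(temp_k):.1f} micrometres")
```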

The James Webb telescope weighs 6.2 tonnes. The main mirror of the telescope has a diameter of 6.5 meters and a collecting area of 25 square meters; it resembles a giant honeycomb consisting of 18 sections. Due to its impressive size, the main mirror has to be folded for launch. This giant mirror will capture light from the most distant galaxies, creating a clear picture and eliminating distortion. A special type of beryllium was used in the mirror, which retains its shape at low cryogenic temperatures. The front of the mirror is covered with a layer of 48.25 grams of gold, 100 nanometers thick; such a coating best reflects infrared radiation. A small secondary mirror sits opposite the main mirror; it receives light from the main mirror and directs it to instruments at the rear of the telescope. The sunshield is 20 meters long and 7 meters wide. It is composed of very thin layers of Kapton polyimide film, which protects the mirror and instruments from sunlight and cools the telescope’s ultra-sensitive detectors to about −220° Celsius.

The NIRCam, or Near Infrared Camera, is the main set of eyes of the telescope; with the NIRCam we expect to be able to view the oldest stars in the universe and the planets around them. The NIRSpec near-infrared spectrograph will collect information on both the physical and chemical properties of an object. The MIRI mid-infrared instrument will allow us to see stars being born and many unknown objects of the Kuiper Belt. The Near Infrared Imager and Slitless Spectrograph, or NIRISS, is aimed at finding exoplanets and the first light of distant objects. Finally, the FGS, or Fine Guidance Sensor, helps accurately point the telescope for higher-quality images; it updates its position in space sixteen times per second and controls the operation of the steering and main mirrors. The plan is to launch the telescope with the help of the European launch vehicle Ariane 5 from the Kourou spaceport at the Guiana Space Centre in French Guiana. The device is designed for between 5 and 10 years of operation, but it may serve longer. If everything goes well, $10 billion worth of construction and years of preparation will finally pay off in orbit.

The deepest image of the universe ever taken – Hubble Space Telescope

The Hubble Space Telescope is the most famous telescope in the world. It was named after the famous astronomer Edwin Hubble, who changed our understanding of the universe by proving the existence of other galaxies. It is an automatic observatory that has discovered millions of new objects in space. It has helped us witness the birth of new stars, find planets outside the solar system, and see supermassive black holes. Hubble was launched in 1990, and from December 1993 to May 2009 astronauts visited it five times to make repairs and install new instruments.

Hubble holds the record for the longest range of observation. The light from the most distant galaxies has taken billions of years to travel across the universe and reach Hubble. By taking this picture, Hubble was literally looking back in time to the very early universe. On the right side of the image, you can notice a galaxy very much like the Milky Way; that galaxy is about five billion light years away, so we are looking back in time by five billion years. On March 4, 2016, NASA released a historic image, one that many believed was impossible. It captured the farthest away of all known galaxies, located about 13.4 billion light years from us. The light from this galaxy has only just reached the Earth after crossing the distance that separates us; that is, we now observe it as it was 400 million years after the Big Bang. This galaxy is 25 times smaller than our own galaxy, the Milky Way. Hubble also helped to pin down the age of the universe, now known to be 13.8 billion years, roughly three times the age of the Earth.

This view of nearly 10,000 galaxies is called the Hubble Ultra Deep Field. The snapshot includes galaxies of various ages, sizes, shapes, and colours. The smallest, reddest galaxies, about 100, may be among the most distant known, existing when the universe was just 800 million years old. The nearest galaxies – the larger, brighter, well-defined spirals and ellipticals – thrived about 1 billion years ago, when the cosmos was 13 billion years old. The image required 800 exposures taken over the course of 400 Hubble orbits around Earth. The total amount of exposure time was 11.3 days, taken between Sept. 24, 2003 and Jan. 16, 2004.

With its advanced camera, NASA’s Hubble Space Telescope discovered a new planet called Fomalhaut b, which orbits its parent star Fomalhaut. Fomalhaut is 2.3 times heavier and 6 times larger than the Sun, and around it is a disc of cosmic dust that creates the resemblance of an ominous eye. Fomalhaut b lies 1.8 billion miles inside the ring’s inner edge and orbits 10.7 billion miles from its star. Astronomers have calculated that Fomalhaut b completes an orbit around its parent star every 872 years. The Fomalhaut system is 25 light years away in the constellation Piscis Austrinus. But in April 2020, astronomers began doubting its existence; the planet is missing in new Hubble pictures. Scientists now believe that this “planet” was a cloud of dust and debris formed by a collision of two icy celestial bodies.

Fomalhaut – the brightest star in the constellation of Piscis Austrinus

In 1995, Hubble captured the most detailed image of the iconic feature called the Pillars of Creation. The Pillars of Creation are a fascinating but relatively small feature of the entire Eagle Nebula. The blue color in the image represents oxygen, red is sulfur, and green represents both nitrogen and hydrogen. The nebula, discovered in 1745 by the Swiss astronomer Jean-Philippe Loys de Chéseaux, is located 7,000 light years from Earth in the constellation Serpens. During its work Hubble has produced millions of images, but NASA has now suspended missions to repair and modernize the telescope. It is expected that in 2021 Hubble will be succeeded by the new James Webb Space Telescope.

How advanced are alien civilizations? – The Kardashev scale

The observable universe consists of up to two trillion galaxies, each made of billions and billions of stars. In the Milky Way galaxy alone, scientists estimate that there are some 40 billion Earth-like planets in the habitable zones of their stars. When you look at these numbers, there is a real possibility that alien civilizations exist. In a universe that big and old, civilizations may have begun millions of years apart from each other and developed in different directions and at different speeds, so they may range from cavemen to the super advanced. We know that humans started out with nothing and then progressed to making tools, building houses, and so on. We know that humans are curious, competitive, greedy for resources, and expansionist. The more of these qualities our ancestors had, the more successful they were in the civilization-building process.

Other alien civilizations might have evolved in much the same way. Human progress can be measured fairly precisely by how much energy we extract from our environment. As our energy consumption grew exponentially, so did the abilities of our civilization. Between 1800 and 2015, the human population increased sevenfold, while humanity’s energy consumption grew 25-fold. It is likely that this process will continue into the far future. Based on these facts, the scientist Nikolai Kardashev developed a method for categorizing civilizations, from cave dwellers to gods ruling over galaxies, into a scale called the Kardashev scale. It is a method of ranking civilizations by their energy use, and it puts civilizations into four categories. A type 1 civilization is able to use the available energy of its home planet. A type 2 civilization is able to use the available energy of its star and planetary system. A type 3 civilization is able to use the available energy of its galaxy. A type 4 civilization is able to use the available energy of multiple galaxies.

The gap between types is like comparing an ant colony to a human metropolitan area: to ants we are so complex and powerful, we might as well be gods. On the lower end of the scale are type 0 to type 1 civilizations, anything from hunter-gatherers to something we could achieve in the next few hundred years. These might actually be abundant in the Milky Way. If so, why are they not sending any radio signals into space? Even if they transmitted radio signals like we do, it might not help much. In such a vast universe, our own signals may extend over 200 light years, but this is only a tiny fraction of the Milky Way. And even if someone were listening, after a few light years our signals decay into noise, impossible to identify as coming from an intelligent species. Today humanity ranks at about level 0.75. We have created huge structures and changed the composition and temperature of the atmosphere. If progress continues, we will become a full type 1 civilization in the next few hundred years. The next step toward type 2 is to mine other planets and bodies.
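The 0.75 figure quoted above comes from treating the scale as continuous. Carl Sagan proposed an interpolation of Kardashev’s scale, K = (log10 P − 6) / 10, where P is the civilization’s power use in watts. The Python sketch below is an added illustration using order-of-magnitude reference powers (assumed round numbers, not precise measurements).

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Sagan's continuous Kardashev scale: K = (log10(P) - 6) / 10, P in watts."""
    return (math.log10(power_watts) - 6) / 10

# Rough, order-of-magnitude reference points:
examples = {
    "humanity today (~2e13 W)": 2e13,
    "type 1, a whole planet (~1e16 W)": 1e16,
    "type 2, a Sun-like star (~4e26 W)": 3.8e26,
    "type 3, a whole galaxy (~4e37 W)": 4e37,
}
for label, watts in examples.items():
    print(f"{label}: K = {kardashev_level(watts):.2f}")
```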

The Dyson sphere – mega-structures built around a star to draw energy

As a civilization expands and uses more and more stuff and space, at some point it may start its largest project yet: extracting the energy of its star by building a Dyson swarm. Once this is finished, energy becomes effectively unlimited, and the next frontier moves to other stars light years away. As a species gets closer to type 3, it might discover new physics, come to understand and control dark matter and dark energy, or become able to travel faster than light. For such a species, humans are the ants, trying to understand the galactic metropolitan area. A high type 2 civilization might already consider humanity too primitive; a type 3 civilization might consider us bacteria. But the scale doesn’t end here; some scientists suggest there might be type 4 and type 5 civilizations, whose influence stretches over galaxy clusters or superclusters. This scale is just a thought experiment, but it still raises interesting questions. Who knows, there might be a type omega civilization, able to manipulate the entire universe, and they might even be the actual creators of our universe.

“Somewhere, something incredible is waiting to be known.” – Carl Sagan

How we measure extreme distances in space – Light years

In the 1800s, scientists discovered the realm of light beyond what is visible. The 20th century saw dramatic improvements in observation technologies. Now we are probing distant planets, stars, galaxies, and black holes that even light takes years to reach. So how do we do that? Light is the fastest thing we know of in the universe. It is so fast that we measure enormous distances by how long it takes light to travel them. In one year, light travels about 6 trillion miles; this is the distance we call one light year. Apollo 11 took four days to reach the Moon, yet the Moon is only about one light second from Earth. Meanwhile, the nearest star beyond our own Sun is Proxima Centauri, and it is 4.24 light years away. Our Milky Way galaxy is on the order of 100,000 light years across, and the nearest large galaxy to our own, Andromeda, is about 2.5 million light years away.
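The figures above can be checked with simple arithmetic: a light year is just the speed of light multiplied by the number of seconds in a year. The snippet below is a back-of-the-envelope illustration added here, using an average Earth-Moon distance of about 239,000 miles.

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282
SECONDS_PER_YEAR = 365.25 * 24 * 3600

light_year_miles = SPEED_OF_LIGHT_MILES_PER_SEC * SECONDS_PER_YEAR
print(f"One light year is about {light_year_miles / 1e12:.1f} trillion miles")

# The Moon (about 239,000 miles away) is just over one light second from Earth.
print(f"Light reaches the Moon in about {239_000 / SPEED_OF_LIGHT_MILES_PER_SEC:.1f} seconds")

# Proxima Centauri, 4.24 light years away, expressed in miles:
print(f"Proxima Centauri is about {4.24 * light_year_miles / 1e12:.0f} trillion miles away")
```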

The question is, how do we know the distances to these stars and galaxies? For objects that are very close by, we can use a concept called trigonometric parallax. Hold up your thumb, close your left eye, and then open your left eye and close your right eye. It will look like your thumb has moved, while more distant objects have remained in place. The same concept applies in measuring distant stars, but they are much farther away than the length of your arm, and the Earth is not large enough: even if you had different telescopes across the equator, you would not see much of a shift in position. So instead we look at the change in a star’s apparent location over six months. When we measure the relative positions of the stars in summer, and then again in winter, nearby stars seem to have moved against the background of the more distant stars and galaxies.
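In practice the six-month shift is expressed as a parallax angle, and the distance follows from simple geometry: a star with a parallax of one arcsecond is one parsec (about 3.26 light years) away, and the distance in parsecs is one divided by the parallax in arcseconds. A minimal sketch of that calculation, added here for illustration:

```python
# Trigonometric parallax in the small-angle limit:
#   distance (parsecs) = 1 / parallax (arcseconds), and 1 parsec ~ 3.26 light years.
LIGHT_YEARS_PER_PARSEC = 3.26

def distance_light_years(parallax_arcsec: float) -> float:
    return (1.0 / parallax_arcsec) * LIGHT_YEARS_PER_PARSEC

# Proxima Centauri shows a parallax of roughly 0.77 arcseconds.
print(f"Proxima Centauri: about {distance_light_years(0.77):.2f} light years")
```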

But this method only works for objects less than a few thousand light years away. For greater distances, we use a different method with indicators called standard candles. Standard candles are objects whose intrinsic brightness, or luminosity, we know well. For example, if you know how bright your light bulb is, then even when you move away from it you can find the distance by comparing the amount of light you receive to its intrinsic brightness. In astronomy, one such standard candle is a special type of star called a Cepheid variable. These stars constantly contract and expand, and because of this their brightness varies. We can calculate the luminosity by measuring the period of this cycle, with more luminous stars changing more slowly. By comparing the light that we receive to the intrinsic brightness, we can calculate the distance.
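The light-bulb comparison works because received brightness falls off with the square of the distance: the flux F from a source of luminosity L is F = L / (4πd²), so knowing L and measuring F gives d. The sketch below uses hypothetical numbers purely for illustration.

```python
import math

def distance_metres(luminosity_watts: float, flux_watts_per_m2: float) -> float:
    """Invert the inverse-square law: F = L / (4 * pi * d**2)."""
    return math.sqrt(luminosity_watts / (4 * math.pi * flux_watts_per_m2))

# Hypothetical example: a 100 W bulb whose measured flux is 1e-6 W/m^2
# must be roughly 2.8 km away.
print(f"{distance_metres(100, 1e-6) / 1000:.1f} km")
```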

The Type Ia supernovae – Death of a star

But we can only observe individual stars up to about 40 million light years away. Beyond that, we have to use another type of standard candle called a Type Ia supernova. Supernovae are giant stellar explosions and are one of the ways that stars die. These explosions are so bright that they outshine the galaxies where they occur. We can use Type Ia supernovae as standard candles because intrinsically bright ones fade more slowly than fainter ones. With an understanding of brightness and decline rate, we can use these supernovae to probe distances up to several billions of light years away. But what is the importance of seeing distant objects? Well, the light emitted by the Sun takes eight minutes to reach us, which means that the light we see now is a picture of the Sun eight minutes ago. Galaxies are millions of light years away, so it has taken millions of years for their light to reach us. The universe thus has a kind of built-in time machine: the further back we look, the younger the universe we are probing. Astrophysicists use this to read the history of the universe, and to understand how and where we come from.

“Dream in light years, challenge miles, walk step by step.” – Anonymous

How powerful is a hydrogen bomb? And how does it work?

At 8:15 on the morning of 6 August 1945, all people saw was a blinding light, followed by complete darkness and destruction. It was the most powerful weapon mankind had ever created, unleashing energy and radiation that killed a hundred and forty thousand people in the industrial city of Hiroshima, Japan. Today we have thermonuclear weapons, also called hydrogen bombs. Edward Teller, a Hungarian physicist, worked on the Manhattan Project to produce the first atomic bomb, based on uranium fission. Teller had long been interested in a hydrogen fusion bomb, but secrecy and the lack of access to computers contributed to slow progress. Stanislaw Ulam, a Polish mathematician, realized that a fission bomb could be used as a trigger for a fusion reaction. It is believed that Teller seized on this for what became, in 1951, the “Teller-Ulam” design. Most sources agree that the H-bomb works in a series of stages, occurring in microseconds, one after the other. A narrow metal case houses two nuclear devices separated by polystyrene foam. One is ball shaped, the other cylindrical. The ball is essentially a standard atomic fission bomb. When it is detonated, high-energy radiation rushes out ahead of the blast.

The first H-Bomb test took place on November 1, 1952 on the small Pacific island of Elugelab.

How a hydrogen bomb works

The first hydrogen bomb released the energy equivalent of 10 million tons of TNT. While the atomic bomb works on the principle of releasing energy by splitting atoms, called fission, a hydrogen bomb works by fusing atoms together, and it produces far more energy than the atomic bomb. Fusion is more powerful than fission; it is the same process that powers our Sun. When fission is combined with the fusion of hydrogen, it creates energy orders of magnitude greater than fission alone, which makes the hydrogen bomb hundreds to thousands of times more powerful than atomic bombs. The fusion portion of the bomb creates energy by combining two isotopes of hydrogen, called deuterium and tritium, to create helium. Unlike a natural hydrogen atom, which is made of one electron orbiting one proton, these isotopes have extra neutrons in their nuclei. A large amount of energy is released when these two isotopes fuse together to form helium, because a helium atom has much less mass than the two isotopes combined; this excess mass is released as energy. One of the main problems with creating the hydrogen bomb was obtaining the tritium. Scientists found that they could generate it inside the bomb itself using a compound of lithium and deuterium.
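The “missing mass” argument can be made concrete with E = mc². In the deuterium-tritium reaction the products (helium-4 plus a neutron) weigh slightly less than the reactants, and that mass difference corresponds to roughly 17.6 MeV per fusion. The sketch below is an added rough check using standard atomic masses, not a description of any weapon design.

```python
# Mass defect of the D + T -> He-4 + n reaction, converted to energy.
ATOMIC_MASS_U = {           # masses in atomic mass units (u)
    "deuterium": 2.014102,
    "tritium":   3.016049,
    "helium-4":  4.002602,
    "neutron":   1.008665,
}
MEV_PER_U = 931.494         # energy equivalent of 1 u

mass_defect = (ATOMIC_MASS_U["deuterium"] + ATOMIC_MASS_U["tritium"]
               - ATOMIC_MASS_U["helium-4"] - ATOMIC_MASS_U["neutron"])
print(f"Energy released per fusion: about {mass_defect * MEV_PER_U:.1f} MeV")
```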

Scientists chose hydrogen for fusion because it has only one proton and thus has less electrical charge than atoms with multiple protons in their nuclei. It is possible to combine nuclei when the temperature is increased far enough. The temperatures needed are astronomically high, far hotter even than the center of our Sun: about 100 million degrees Celsius, whereas the center of the Sun is about 15 million degrees. At this temperature the isotopes become a form of matter called plasma, in which the orbiting electrons are stripped away from the nuclei. The nuclei can then combine with each other to form a helium nucleus and a free neutron. But how is a temperature of 100 million degrees achieved? This is where the fission (atomic) bomb inside the hydrogen bomb’s casing comes into play: the fission provides the energy needed to heat up the fusion reaction. A hydrogen bomb is actually three bombs in one. It contains an ordinary chemical bomb, a fission bomb, and the fusion bomb. The chemical bomb initiates the fission bomb, which initiates the fusion bomb. All these events happen in only about 600 billionths of a second: 550 billionths of a second for the fission bomb implosion, and 50 billionths of a second for the fusion bomb. The result is an immense explosion with a 10-million-ton yield, 700 times more powerful than an atomic bomb. Only six countries have such bombs: China, France, India, Russia, the United Kingdom, and the United States. The world now has over 10,000 such bombs, capable of easily destroying every single person on Earth many times over.

“I don’t know what weapons countries might use to fight World War III, but wars after that will be fought with sticks and stones.” – Albert Einstein

The Atomic Bomb – How it changed the course of history

During World War II the United States spent an unprecedented $2 billion to feed an ultra-secret research and development program, the outcome of which would alter the relationships of nations forever. Known as the Manhattan Project, it was the effort by the United States and her closest allies to create a practical atomic bomb: a single device capable of mass destruction, the threat of which alone could be powerful enough to end the war. The motivation was simple. Scientists escaping the Nazi regime had revealed that research in Germany had confirmed the theoretical viability of atomic bombs. In 1939, fearing that the Nazis might now be developing such a weapon, Albert Einstein and others wrote to President Franklin D. Roosevelt (FDR) warning of the need for atomic research. By 1941 FDR had authorized formal, coordinated scientific research into such a device. Among those whose efforts would ultimately unleash the power of the atom was Robert Oppenheimer, who was appointed the project’s scientific director in 1942. Under his direction the famous laboratories at Los Alamos were constructed and the scientific team assembled. On July 16, 1945, near the small town of Alamogordo, New Mexico, the course of human history was changed; the first atomic bomb was detonated that day.

The Trinity test – the world’s first atomic bomb was detonated at 5:30 A.M. on July 16, 1945, near Alamogordo, New Mexico

Principle of an atomic bomb

An atom bomb works on the principle that when you break up the nucleus of an atom, a large amount of energy is released, because it takes a large amount of energy to keep the nucleus bound together; when you split it apart, that energy is released. Scientists chose the biggest and heaviest nucleus found in nature as the best candidate for splitting: uranium. Uranium is unique in that one of its isotopes is the only naturally occurring isotope capable of sustaining a nuclear fission chain reaction. A uranium atom has 92 protons and 146 neutrons, giving an atomic mass of 238, or U-238. A very small portion of uranium, when it is mined, is in the form of the isotope U-235; this isotope has the same 92 protons but only 143 neutrons, three fewer than U-238. U-235 is highly unstable, which makes it highly fissionable. When a U-235 nucleus is slammed by a neutron, it briefly becomes uranium-236. In the process of splitting into two more stable atoms, a large amount of energy is released, along with three more neutrons. These neutrons fly out and slam into more U-235 atoms, and thus a chain reaction occurs, causing more and more U-235 to be split and ultimately causing a huge explosion. Natural uranium contains only 0.7% of the U-235 isotope, and a large quantity of it is needed to make one atomic bomb.
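The explosive growth of the chain reaction is just exponential arithmetic. Although each fission releases about three neutrons, only some of them go on to cause further fissions; the toy sketch below assumes an effective multiplication factor of 2 per generation purely to illustrate the scaling, and is in no sense a weapon model.

```python
def fissions_in_generation(n: int, k: float = 2.0) -> float:
    """Fissions in the n-th generation if each fission triggers k more on average."""
    return k ** n

for n in (10, 40, 80):
    print(f"generation {n}: about {fissions_in_generation(n):.2e} fissions")
```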

Another engineering challenge is to create a vessel with the correct shape and material to contain the neutrons after fissioning, so that they do not escape but instead cause more atoms to fission. The vessel is lined with a reflector that acts like a mirror for neutrons, forcing them back into the fissionable material rather than letting them escape. Then the correct amount of fissionable material has to be placed inside this vessel; this is called the ‘supercritical mass’, and there has to be enough mass to sustain an uncontrollable chain reaction resulting in an explosion. The supercritical mass has to be kept divided until you are ready for an explosion; otherwise an explosion could occur when you don’t want it, because these isotopes are unstable and are throwing off neutrons randomly. In an atomic bomb, two sub-critical masses are slammed together, usually by a conventional explosive contained inside the outer bomb. This conventional explosive charge initiates the chain reaction. The Manhattan Project ultimately created the first man-made nuclear explosion, which Robert Oppenheimer code-named “Trinity,” on July 16, 1945. The concept of an atom bomb is simple, but the process of actually creating one is not so simple.

“Now I am become Death, the destroyer of worlds.” – J. Robert Oppenheimer

3 Great inventions with simple engineering techniques – Catseyes, Fountain pen, Safety pin

Catseyes – Road reflectors

When you are driving at night, you see reflective objects on the road. These objects reflect light and guide people to drive safely. They may seem simple, but the idea for this invention came about in an interesting way. One night in 1933, when the road mender Percy Shaw was driving home in Yorkshire, he saw the light of his car headlamps reflected in the eyes of a cat beside the road. This gave Shaw the inspiration that by replicating the effect he could produce a practical way of helping drivers navigate poorly lit roads. Shaw’s challenge was to create a device bright enough to illuminate roads at night, robust enough to cope with cars constantly driving across it, and requiring minimal maintenance. Shaw came up with a small device that could be inserted into the road as a marker. It consisted of four glass beads placed in two pairs facing in opposite directions, embedded in a flexible rubber dome. When vehicles drove over the dome, the rubber contracted and the glass beads dropped safely beneath the road surface. The device was even self-cleaning: the cast-iron base collected rainwater, and whenever the top of the dome was depressed, the rubber would wash the water across the glass beads to cleanse away any grime, just as the eye is cleaned by tears. The patent for the catseye was registered in 1934, and in 2001 the product was voted the greatest design of the twentieth century, ahead even of Concorde.

Percy Shaw – Catseyes in 1934

Fountain pen

The invention of the modern fountain pen is really more a story of perfection than invention. In 1883, more than fifty years after the fountain pen was first invented, a New York insurance broker, Lewis Waterman, was set to sign an important contract and decided to honor the occasion by using the standard ink-filled pen of the day. However, fountain pens were notoriously unreliable, especially in their capacity to regulate ink flow; when his pen failed and the contract could not be signed, Waterman decided to do something about it. Within a year Lewis Waterman had designed the world’s first practical, usable, and virtually leak-proof fountain pen. To regulate the flow of ink he successfully applied the principle of capillary action, including a tiny air hole in the nib along with grooves in the feed mechanism to control the flow of ink from his new leak-proof reservoir to the nib.

As early as the beginning of the eighteenth century, the chief instrument-maker to the king of France, M. Bion, crafted fountain pens with nibs, five of which survive to this day. The first steel pen point was manufactured in 1828, thought to have been invented by Petrache Poenaru, and in the 1830s the inventor James Perry made several unsuccessful attempts at designing nibs that employed the principle of capillary action. But it was Lewis Waterman who overcame every obstacle and crafted a successful pen. It was so successful that by 1901, two years after Waterman’s death, more than 350,000 pens of his design had been sold worldwide.

Safety pin

When it comes to simple engineering, we can’t overlook the safety pin. This useful object is found in households across the globe, and it even gained status as a fashion accessory with the punk movement in the 1970s. Walter Hunt was a New York mechanic who, in 1849, sat wondering how to pay off a $15 loan. He spent around three hours twisting a length of wire in his fingers before he created the answer to his problems: the humble safety pin. Pins were by no means a new idea, having existed for centuries before Walter’s twist on the design. However, his creation was unique in that it provided a solution to the potential problem of pricking oneself with the old-style variety. His pin has a clasp at the top which locks the point in place and keeps the user safe from pricking. At the bottom it has a spring-like structure, made by coiling the same wire, which maintains the tension of the pin. Hunt’s design was patented in April 1849, and he sold the rights to his creditor, clearing a $385 profit. Unfortunately, Hunt had no idea how popular his invention was set to become. Even after more than 150 years, we are still using the safety pin, which works on very simple engineering. Hunt also designed America’s first sewing machine with an eye-pointed needle, but fearing the loss of jobs his creation might cause, he did not patent the idea. It was left to a fellow American, Elias Howe, to claim the credit for this invention some twenty years later.

“A man who could invent a safety pin . . . was truly a mechanical genius . . .” – New York Times

Evolution of the motorcar’s mechanisms

It is difficult to imagine a world without the motorcar. Back in the 1700s, some of the very first cars were powered by steam engines. When German engineer Karl Benz drove a motorized tricycle in 1885, and fellow Germans Gottlieb Daimler and Wilhelm Maybach converted a horse-drawn carriage into a four-wheeled motorcar in August 1886, none of them could have imagined the effects of their invention. Benz recognized the great potential of petrol as a fuel. His three-wheeled car had a top speed of just ten miles (16 km) per hour with its four-stroke, one-cylinder engine. After receiving his patent in January 1886, he began selling the Benz Velo, but the public doubted its reliability. Benz’s wife Bertha had a brilliant idea to advertise the new car: in 1888 she took it on a 60-mile (100 km) trip from Mannheim to near Stuttgart. Despite having to push the car up hills, the success of the journey proved to a skeptical public that this was a reliable mode of transport.

Daimler and Maybach did not produce commercially feasible cars until 1889. Initially the German inventions did not meet with much demand, and it was French companies like Panhard et Levassor that redesigned and popularized the automobile. In 1926 Benz’s company merged with Daimler’s to form the Daimler-Benz company. Benz had left his company in 1906 and, remarkably, he and Daimler never met. Thanks to higher incomes and cheaper, mass-produced cars, the United States led in terms of motorization for much of the twentieth century. This kind of mobility has, however, come at a cost. Some 25 million people are estimated to have died in car accidents worldwide during the twentieth century. Climate-changing exhaust gases and suburban sprawl are but two more consequences of a heavy reliance on the automobile.

Karl Benz with his wife Bertha, the first motor car (1885)

Invention of the clutch

Almost all historians agree that the clutch was developed in Germany in the 1880s. Daimler met Maybach while they were working for Nikolaus Otto, the inventor of the internal combustion engine. In 1882 the two set up their own company, and from 1885 to 1886 they built a four-wheeled vehicle with a petrol engine and multiple gears. The gears were external, however, and were engaged by winding belts over pulleys to drive each selected gear. In 1889 they developed a closed four-speed gearbox and a friction clutch to power the gears; this car was the first to be marketed by the Daimler Motor Company, in 1890. Without a clutch, if the car engine is running, the wheels keep turning; for the car to stop without stalling, the wheels and engine must be separated by a clutch. A friction clutch consists of a flywheel mounted on the engine side and a clutch plate on the drive shaft, a large metal plate covered with a frictional material. When the flywheel and clutch plate make contact, power is transmitted to the wheels.

Patent drawings of the first motorcar – Karl Benz

Gears in Motorcars

Karl Benz was the first to add a second gear to his machine, and he also invented the gear shift to change between the two. The suggestion for this additional gear came from Benz’s wife, Bertha, who drove the three-wheeled Motorwagen 65 miles from Mannheim to Pforzheim, the first long-distance automobile trip. Gears allow the engine to be maintained at its most efficient rpm while altering the relative speed of the drive shaft to the wheels. Gears originally required double clutching, where the clutch had to be depressed to disengage the first gear from the drive shaft, and then released to allow the correct rpm for the new gear to be selected. The clutch was then pressed again to engage the drive shaft with the new gear. Modern cars use synchromesh gearboxes, which use friction to match the speeds of the new gear and the shaft before the teeth of the gears engage, meaning that the clutch only needs to be pressed once.
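The point of the gearbox can be shown with a small calculation: wheel rpm is engine rpm divided by the product of the gear ratio and the final-drive ratio, and road speed follows from the wheel circumference. The ratios and wheel size below are hypothetical illustrative values, not figures for any real car.

```python
import math

WHEEL_DIAMETER_M = 0.6                                     # assumed wheel diameter
FINAL_DRIVE = 3.9                                          # assumed differential ratio
GEAR_RATIOS = {1: 3.5, 2: 2.1, 3: 1.4, 4: 1.0, 5: 0.8}     # assumed gearbox ratios

def road_speed_kmh(engine_rpm: float, gear: int) -> float:
    wheel_rpm = engine_rpm / (GEAR_RATIOS[gear] * FINAL_DRIVE)
    metres_per_minute = wheel_rpm * math.pi * WHEEL_DIAMETER_M
    return metres_per_minute * 60 / 1000

# The same 3,000 rpm at the engine gives very different road speeds in each gear.
for gear in GEAR_RATIOS:
    print(f"gear {gear}: about {road_speed_kmh(3000, gear):.0f} km/h")
```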

“One thing I feel most passionately about: love of invention will never die” – Karl Benz

Why is the Higgs boson called the ‘God Particle’?

By Shashikant Nishant Sharma

In 1964 Peter Higgs, along with five other scientists, proposed a theory called the Higgs mechanism to explain the existence of mass in the universe. Before the 1930s, atoms were considered the fundamental particles. Then we found electrons, protons, and neutrons as subatomic particles. Later we found that protons and neutrons are made up of even smaller fundamental particles called quarks. Quarks are fundamental building blocks of the whole universe. The key evidence for the existence of these elementary particles came from a series of inelastic electron-nucleon scattering experiments conducted between 1967 and 1973 at the Stanford Linear Accelerator Center. Quarks are commonly found in protons and neutrons. There are six types of quarks: up, down, top, bottom, strange, and charm. They can have positive (+) or negative (−) electric charge. Up, charm, and top quarks have a charge of +2/3. Down, strange, and bottom quarks have a charge of −1/3. So protons are positive because they contain two up quarks (+2/3 each) and one down quark (−1/3), giving a net positive charge (+2/3 + 2/3 − 1/3 = 1). These three quarks are known as valence quarks, but the proton can also contain additional up quark and anti-up quark pairs.
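The charge bookkeeping above is easy to verify. The tiny sketch below, added for illustration, sums the valence-quark charges of the proton and the neutron using exact fractions.

```python
from fractions import Fraction

QUARK_CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

particles = {
    "proton":  ["up", "up", "down"],     # +2/3 +2/3 -1/3 = +1
    "neutron": ["up", "down", "down"],   # +2/3 -1/3 -1/3 =  0
}
for name, quarks in particles.items():
    charge = sum(QUARK_CHARGE[q] for q in quarks)
    print(f"{name}: net charge {charge}")
```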

The Higgs field theory

In the second half of the 20th century, physicists developed a theory called the Standard Model of particle physics. It describes twelve fundamental particles that make up all matter, plus particles called bosons that are responsible for three fundamental forces of nature: the strong force, the weak force, and electromagnetism. Gravity is another force; it is not part of this model, but it can be modeled using general relativity. With the fundamental particles of the Standard Model plus gravity, we can build almost everything in the entire universe. However, until 2012 the Standard Model had an unresolved problem: in its simplest form, all force-carrying particles should be massless. Although photons are indeed massless, experiments show that the weak-force bosons have mass. So here was a promising model that could explain our universe, yet it seemed it might have to be thrown out because of this apparently fatal flaw in the way it described the weak force. In the late 1950s physicists had no idea how to resolve the issue, and every attempt to solve the problem only created new theoretical problems. In 1964, Peter Higgs hypothesized that perhaps the force particles were fundamentally massless but gained mass when they interacted with an energy field that pervades the entire universe.

During the very early moments following the Big Bang, the elementary particles in the universe were massless; they were pure streams of energy moving at the speed of light. As the expansion of the universe proceeded, density and temperature decreased below a certain key value. According to the theory, the Higgs field then interacts with particles and can give them mass. Different particles are theorized to interact differently with the field: particles that interact with it more intensely have greater mass, and particles that interact with it less have lower mass. Imagine the Higgs field as water: pointed, streamlined objects interact less with the water, while cube-shaped objects interact with it more. Some particles, such as photons, don’t interact with the field at all and so are massless. A fundamental part of the theory was the presence of a specific particle, called the Higgs boson, a boson that would allow the Higgs mechanism to unfold correctly and give mass to all other particles.

The Higgs Boson – CMS experiment

CERN’s discovery of a new particle

Even though Higgs theorized it in 1964, scientists were not able to prove it until 2012, because particle accelerators had to reach a huge amount of energy to detect it. Finally, the Large Hadron Collider (LHC), CERN’s particle accelerator, was switched on in 2008 and managed to recreate the required energy and temperature conditions in 2012. The Higgs boson was at last detected experimentally, and on 4 July 2012 a conference held in the CERN auditorium announced the discovery of a particle compatible with the Higgs boson. The machine accelerates bunches of hadrons to close to the speed of light and collides them with each other from opposite directions. At four separate points the two beams cross, causing protons to smash into each other at enormous energies, with their destruction being witnessed by super-sensitive instruments. Even though the LHC is the world’s largest particle accelerator, it had to work hard to detect the Higgs boson. If the Higgs field did not exist, all particles in the universe would be massless and would fly around the universe at the speed of light. For this reason the Higgs boson is often called the ‘God particle’.

“I never expected this to happen in my lifetime and shall be asking my family to put champagne in the fridge.” – Peter Higgs

How do waves differ from tides? Why do they occur?

A wave begins as the wind ruffles the surface of the ocean. When the ocean is calm and glass-like, even the mildest breeze forms ripples, the smallest type of wave. Ripples provide surfaces for the wind to act on, which produces larger waves. Stronger winds push the nascent waves into steeper and higher hills of water. The size a wave reaches depends on the speed and strength of the wind, the length of time it blows, and the distance over which it blows in the open ocean, which is known as the fetch. A long fetch accompanied by strong and steady winds can produce enormous waves. The highest point of a wave is called the crest and the lowest point the trough. The distance from one crest to another is known as the wavelength.

Although water appears to move forward with the waves, for the most part water particles travel in circles within the waves. The visible movement is the wave’s form and energy moving through the water, courtesy of energy provided by the wind. Wave speed also varies; on average waves travel about 20 to 50 mph. Ocean waves vary greatly in height from crest to trough, averaging 5 to 10 feet. Storm waves may tower 50 to 70 feet or more. The biggest wave ever recorded by humans was in Lituya Bay, on the southeast side of Alaska, on July 9, 1958, when a massive earthquake triggered a mega-tsunami, the tallest in modern times. As a wave enters shallow water and nears the shore, its up-and-down movement is disrupted and it slows down. The crest grows higher and begins to surge ahead of the rest of the wave, eventually toppling over and breaking apart. The energy released by a breaking wave can be explosive. Breakers can wear down rocky coasts and also build up sandy beaches.
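For waves in deep water, linear wave theory relates speed and wavelength to the wave period: speed is gT/2π and wavelength is gT²/2π. The sketch below, added as an illustration of that standard approximation, shows how typical ocean swell periods land in the 20 to 50 mph range quoted above.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deep_water_wave(period_s: float) -> tuple[float, float]:
    """Phase speed (m/s) and wavelength (m) of a deep-water wave of period T."""
    speed = G * period_s / (2 * math.pi)
    wavelength = G * period_s ** 2 / (2 * math.pi)
    return speed, wavelength

for period in (6, 10, 14):
    speed, wavelength = deep_water_wave(period)
    print(f"T = {period:>2} s: about {speed * 2.237:.0f} mph, wavelength {wavelength:.0f} m")
```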

At Nazaré, the Brazilian Rodrigo Koxa holds the record for the biggest wave ever surfed (80 ft).

Why does a tide occur?

Tides are the regular daily rise and fall of ocean waters. Twice each day in most locations, water rises up over the shore until it reaches its highest level, or high tide. In between, the water recedes from the shore until it reaches its lowest level, or low tide. Tides respond to the gravitational pull of the moon and sun. Gravitational pull has little effect on the solid and inflexible land, but the fluid oceans react strongly. Because the moon is closer, its pull is greater, making it the dominant force in tide formation.

Gravitational pull is greatest on the side of Earth facing the Moon and weakest on the side opposite the Moon. The difference in these forces, in combination with Earth’s rotation and other factors, allows the oceans to bulge outward on each side, creating high tides. The sides of Earth that are not in alignment with the Moon experience low tides at this time. Tides follow different patterns, depending on the shape of the seacoast and the ocean floor. In Nova Scotia, water at high tide can rise more than 50 feet higher than the low-tide level. Tides tend to roll in gently on wide, open beaches; in confined spaces, such as a narrow inlet or bay, the water may rise to very high levels at high tide.

There are typically two spring tides and two neap tides each month. During a spring tide, which has a greater range than the mean range, the water level rises and falls to the greatest extent from the mean tide level. Spring tides occur about every two weeks, when the moon is full or new. Tides are at their maximum when the Moon and the Sun are aligned with the Earth. In a semi-diurnal cycle, high and low tides occur around 6 hours and 12.5 minutes apart. The same tidal forces that cause tides in the oceans also affect the solid Earth, causing it to change shape by a few inches.
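The 6-hour-12.5-minute figure follows from the length of the lunar day: the Moon returns to the same point in the sky roughly every 24 hours 50 minutes, and a semi-diurnal coast fits two high tides and two low tides into that interval. A one-line check, added for illustration:

```python
LUNAR_DAY_MINUTES = 24 * 60 + 50        # about 1,490 minutes
interval = LUNAR_DAY_MINUTES / 4        # high -> low -> high -> low
print(f"About {interval // 60:.0f} h {interval % 60:.1f} min between successive high and low tides")
```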

How optics changed the world

The formal study of light began as an effort to explain vision. Early Greek thinkers associated vision with a ray emitted from the human eye. A surviving work from Euclid, the Greek geometrician, laid out basic concepts of perspective, using straight lines to show why objects at a distance appear smaller than they actually are. The eleventh-century Islamic scholar Abu Ali al-Hasan Ibn al-Haytham, known also by the Latinized name Alhazen, revisited the work done by Euclid and Ptolemy and advanced the study of reflection, refraction, and color. He argued that light moves out in all directions from illuminated objects and that vision results when light enters the eye. In the late 16th and 17th centuries, researchers including the Dutch mathematician Willebrord Snel noticed how light bent as it passed through a lens or fluid. While many at the time believed the speed of light to be infinite, the Danish astronomer Ole Rømer in 1676 used telescopic observations of Jupiter’s moons to estimate the speed of light at about 140,000 miles a second. Around the same time, Sir Isaac Newton used prisms to demonstrate that white light could be separated into a spectrum of basic colors. He believed that light was made of particles, whereas the Dutch mathematician Christiaan Huygens described light as a wave.

The particle-versus-wave debate advanced in the 1800s. The English physician Thomas Young’s experiments with light suggested wavelike behavior, since sources of light seemed to cancel out or reinforce each other. The Scottish physicist James Clerk Maxwell’s research united electricity and magnetism and showed that light falls along a single electromagnetic spectrum. The arrival of quantum physics in the late 19th and early 20th centuries prompted the next leap in understanding light. By studying the emission of electrons from a surface hit by a beam of light, known as the photoelectric effect, Albert Einstein concluded that light is carried by what he called photons, emitted as electrons change their orbit around an atomic nucleus and then jump back to their original state. Though Einstein’s findings seemed to favor the particle theory of light, further experiments showed that light, and matter itself, behaves both as waves and as particles.

How do lasers work?

Einstein’s work on the photoelectric effect led to the laser, an acronym for “light amplification by stimulated emission of radiation.” As electrons are excited from one quantum state to another, they emit a single photon when jumping back. But Einstein predicted that when an already excited atom was hit with the right type of stimulus, it would give off two identical photons. Subsequent experiments showed that certain source materials, such as ruby, not only did that but also emitted photons that were perfectly coherent: not scattered like the emissions of a flashlight, but all of the same wavelength and phase. These powerfully focused beams are now commonplace, found in grocery store scanners, handheld pointers, and cutting instruments from the hospital operating room to the shop floors of heavy industry.

Future trends in fiber optics communication

Fiber optic communication is definitely the future of data communication. Its evolution has been driven by advances in technology and by growing demand, and it is expected to continue into the future with the development of new and more advanced communication technologies.

Another future trend will be the extension of present semiconductor lasers to a wider variety of lasing wavelengths. Shorter-wavelength lasers with very high output powers are of interest in some high-density optical applications. Presently, laser sources that are spectrally shaped through chirp management to compensate for chromatic dispersion are available. Chirp management means that the laser is controlled so that it undergoes a sudden change in its wavelength when firing a pulse, such that the chromatic dispersion experienced by the pulse is reduced. There is a need to develop instruments to characterize such lasers. Also, single-mode tunable lasers are of great importance for future coherent optical systems. These tunable lasers lase in a single longitudinal mode that can be tuned across a range of frequencies.

“Music is the arithmetic of sounds as optics is the geometry of light.” – Claude Debussy

Large Hadron Collider – the world’s largest machine

The smallest thing that we can see with a light microscope is about 500 nanometers across. A typical atom is anywhere from 0.1 to 0.5 nanometers in diameter, so we need an electron microscope to image atoms. The electron microscope was invented in 1931. Beams of electrons are focused on a sample; when they hit it, they are scattered, and this scattering is used to recreate an image. But what about protons or neutrons? Or what about quarks, the most fundamental building blocks of matter? How did we find that such small particles exist? The answer is a particle collider: a tool that accelerates two beams of particles and smashes them into each other, used since the 1960s.

The largest machine built by man, the Large Hadron Collider (LHC) is a particle accelerator occupying an enormous circular tunnel of 27 kilometres in circumference, lying from 165 to 575 feet below ground near Geneva, Switzerland. It is so large that its circumference crosses the border between France and Switzerland. It is a giant collaboration involving over 100 countries and 10,000 scientists. The tunnel itself was constructed between 1983 and 1988 to house another particle accelerator, the Large Electron-Positron Collider, which operated until 2000. Its replacement, the LHC, was approved in 1995 and was finally switched on in September 2008.

The Large Hadron Collider (LHC) has a circumference of 27 kilometres

Working of the Large Hadron Collider

The LHC is the most powerful particle accelerator ever built and has been designed to explore the limits of what physicists refer to as the Standard Model, which deals with fundamental sub-atomic particles. Two vacuum pipes are installed inside the tunnel, intersecting in some places, and 1,232 main magnets are arranged along them. For proper operation, the collider magnets need to be cooled to −271.3 °C; to attain this temperature, 120 tons of liquid helium is poured into the LHC. The machine can accelerate protons to near the speed of light, so they complete a circuit in less than 90 millionths of a second. Two beams travel in opposite directions around the ring. At four separate points the two beams cross, causing protons to smash into each other at enormous energies, with their destruction being witnessed by super-sensitive instruments. But it is not easy to do this experiment. Each beam consists of bunches of protons, and most of the protons simply miss each other, carry on around the ring, and try again: atoms are mostly empty space, so getting particles to collide is incredibly difficult. It is like firing two needles at each other from 10 kilometres apart and getting them to hit.
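The 90-microsecond figure is easy to check: a proton moving at essentially the speed of light around a 27-kilometre ring completes a lap in the circumference divided by c. A quick added illustration:

```python
SPEED_OF_LIGHT = 299_792_458    # m/s
RING_CIRCUMFERENCE = 27_000     # m, approximate

lap_time = RING_CIRCUMFERENCE / SPEED_OF_LIGHT
print(f"One lap takes about {lap_time * 1e6:.0f} microseconds")
print(f"That is roughly {1 / lap_time:,.0f} laps per second")
```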

Collision of protons at near the speed of light

The aim of these collisions is to produce countless new particles that simulate, on a micro scale, some of the conditions postulated in the Big Bang at the birth of the universe. The Higgs boson was discovered with the help of the LHC. This so-called ‘God Particle’ could be responsible for the very existence of mass. If it disappeared, all particles in the universe would become massless and fly around the universe at the speed of light (299,792,458 m/s); at that speed, light reaches the Moon from Earth in about 1.3 seconds.

“When you look at a vacuum in a quantum theory of fields, it isn’t exactly nothing.” – Peter Higgs