The 5 Different Types of Copywriting You Can See Today

In the marketing and advertising sector, the text that forms part of an ad is known as its copy. Copywriting is the process of writing this text. Copy may be found in paid ads, brochures and even on website pages. The primary objective is what differentiates copywriting from content writing or content marketing. While the goal of content marketing could range from education to entertainment and awareness, copywriting is mainly focused on making sales, getting people to talk about the brand, and prompting desired actions. The copy should communicate the features, price and value of the product as quickly as possible.

Here are the different forms of copywriting an aspiring copywriter could get into:

Brand Copywriting

Also called creative copywriting, brand copywriting deals with creating copy that distinguishes a brand or company. It is aimed at getting people to recognize a brand through its unique copy, developing strong emotional attachments. Brand copywriters create copy for commercials, posters and jingles. Popular examples include Nike’s tagline, “Just Do It”, and jingles like McDonald’s “I’m lovin’ it”. Rather than merely competing with other brands, it helps develop brand recognition and brand recall.

SEO Copywriting

In this age of 24/7 internet and eCommerce, SEO (Search Engine Optimization) is a must-have element. SEO copywriting is all about creating copy that helps you rank highly on search engines. The copy should be attractive, offering value to readers while also mixing in essential keywords and phrases. A healthy amount of keywords ensures that your blog or web page is quickly visible on search engines, driving traffic to your work. Blog posts, web page copy (home page, landing page, etc.), articles and product descriptions are all forms of SEO copywriting.

Social Media Copywriting

This form of copywriting deals with creating attractive posts for brands on social media. A brand can have profiles on various platforms like Facebook, Twitter, Instagram, TikTok and LinkedIn. A social media copywriter attempts to engage the brand’s customers through the content on these pages. The aim is to make the brand memorable. The copy has to suit the style and tone of the platform it is posted on, which means the same copy would look different on Facebook and Instagram. It helps reach out to potential consumers and get them to visit your store.

Starbucks on Instagram

Technical Copywriting

As the name suggests, technical copywriting is all about creating copy that explains information related to your brand or product. The copy goes in-depth, providing useful insight to customers through material like user manuals and product descriptions. This helps build trust in the brand. Contrary to other forms of copywriting, technical copywriting tends to be longer and more detailed, usually seen in blog posts, user guides and white papers. The tricky part is making the copy insightful while keeping it simple and easy to understand. You need some technical experience and good explanation skills, along with plain language free of jargon.

Public Relations Copywriting

PR copywriters write copy that gets your brand noticed by news reporters and journalists, so that they can spread it across various media. This type of copywriting is useful once a brand is established and you want people to remember it. Instead of telling people how good your brand is yourself, you get others to talk about it. Viewers generally consider this unbiased and credible because it is verified and reported by a third party. Press releases and statements from brands and companies that you see on television channels and social media are PR copywriting. It can be used to improve a brand’s reputation, or as damage control after something has occurred that could affect the brand’s image negatively.

The success story of SpaceX – from Falcon 1 to Starship

The Falcon super heavy launch vehicle was designed to transport people, spacecraft and various cargo into space. Such a powerful vehicle wasn’t created instantly; it had its predecessors. The history of the Falcon family began with the creation of the Falcon 1, a lightweight launch vehicle with a length of 21.3 meters, a diameter of 1.7 meters and a launch mass of 27.6 tonnes; the rocket could carry 420 kilograms (926 pounds) of payload. It became the first privately developed rocket to bring cargo into low Earth orbit. The Falcon 1 consisted of only two stages; the first comprised a supporting structure with fuel tanks, an engine and a parachute system. Kerosene was chosen as the fuel, with liquid oxygen as the oxidizer.

The second stage also contained fuel tanks and an engine, though the latter had less thrust than the first-stage engine. The launch cost was $7.9 million. In total, five attempts were made to send the Falcon 1 beyond the atmosphere of our planet, but not all of them were successful. During the rocket’s debut launch, a fire started in the first-stage engine; this led to a loss of pressure, which caused the engine to shut down in the 34th second of flight. The second attempt ran into a problem with the second stage’s fuel system: fuel stopped flowing into its engine at the 474th second of flight, and it shut down as well. The third time the Falcon 1 flew, it carried serious cargo: the Trailblazer satellite and two NASA microsatellites. The first-stage portion of the flight went normally, but when the time came to separate the stages, the first stage struck the second as its engine started, so the second stage couldn’t continue its flight.

The fourth and fifth launches showed good results, but that wasn’t enough. The main problem with the Falcon 1 was low demand due to its limited payload capacity. For this reason, SpaceX designed the Falcon 9; this vehicle can carry up to 23 tons of cargo. It is also a two-stage launch vehicle and uses kerosene and liquid oxygen as propellants. The vehicle is currently in operation, and its launch cost is $62 million. The first stage of the rocket is reusable; it can return to Earth and be used again. The Falcon 9 is designed not only to launch commercial communication satellites but also to deliver Dragon 1 to the ISS. Dragon 1 can carry a six-ton payload from Earth; this spacecraft supplies the ISS with everything it needs and also takes goods back.

Dragon 2 is designed to deliver a crew of four people to the ISS and back to Earth. There is now an ultra-heavy launch vehicle with a payload capacity of almost 64 tons; it is the most powerful and heaviest of the family, called the Falcon Heavy. This rocket was first launched on February 6th, 2018, and the test was successful. The rocket sent Elon Musk’s car, a red Tesla Roadster, into space. After this debut, subsequent launches were also conducted without problems. The launch cost is estimated at $150 million.
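As a rough illustration of why the later rockets made more economic sense, the launch costs and payload figures quoted in this article can be turned into a back-of-the-envelope cost per kilogram. This is a sketch only: the payload numbers are maximum capacities, and real missions rarely fly at full capacity.

```python
# Rough cost-per-kilogram comparison using the figures quoted above:
# (launch cost in dollars, maximum payload in kilograms).
vehicles = {
    "Falcon 1": (7.9e6, 420),
    "Falcon 9": (62e6, 23_000),
    "Falcon Heavy": (150e6, 64_000),
}

for name, (cost, payload_kg) in vehicles.items():
    print(f"{name}: ${cost / payload_kg:,.0f} per kg")
```

The Falcon 1 works out to roughly $19,000 per kilogram, while the Falcon 9 and Falcon Heavy come in under $3,000 per kilogram, which helps explain the Falcon 1's low demand.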

The first stage of the Falcon Heavy consists of three boosters, which together contain 27 incredibly powerful engines, nine in each. The thrust created at takeoff is comparable to 18 Boeing 747s at full power. The second stage is equipped with a single engine. It is planned that the vehicle will be used for missions to the Moon and Mars. Currently, SpaceX is working on the Starship manned spacecraft. According to its creators, this vehicle will be much larger and heavier than all of the company’s existing rockets, able to deliver more than a hundred tons of cargo into space. A launch of Starship to Mars with a payload is planned for 2022. Who knows, one of mankind’s greatest dreams may come true within the next year.

Magnetic resonance imaging (MRI) – History and Unknown Facts

The MRI (magnetic resonance imaging) scan is a medical imaging procedure that uses a magnetic field and radio waves to take pictures of our body’s interior. It is mainly used to investigate or diagnose conditions that affect soft tissue, such as tumors or brain disorders. The MRI scanner is a complicated piece of equipment that is expensive to use and found only in specialized centers. Although Raymond Vahan Damadian (b. 1936) is credited with the idea of using nuclear magnetic resonance to look inside the human body, it was Paul Lauterbur (1929–2007) and Peter Mansfield (b. 1933) who carried out the work most strongly linked to MRI technology. The technique makes use of hydrogen atoms resonating when bombarded with magnetic energy. MRI provides three-dimensional images without harmful radiation and offers more detail than older techniques.

While training as a doctor in New York, Damadian started investigating living cells with a nuclear magnetic resonance machine. In 1971 he found that the signals persisted for longer with cells from tumors than with healthy ones. The methods used at the time were neither effective nor practical, although Damadian received a patent in 1974 for such a machine to be used by doctors to detect cancer cells.

The real shift came when Lauterbur, a U.S. chemist, introduced gradients to the magnetic field so that the origin of radio waves from the nuclei of the scanned object could be worked out. Through this he created the first MRI images in two and three dimensions. Mansfield, a physicist from England, came up with a mathematical technique that would speed up scanning and produce clearer images. Damadian went on to build a full-body MRI machine in 1977 and produced the first full MRI scan of the heart, lungs and chest wall of his skinny graduate student, Larry Minkoff – although in a very different way to modern imaging.

Working of an MRI machine

The key components of an MRI machine are a magnet, radio waves, gradient coils and an advanced computer. Human bodies are about 60% water, and each of the billions of water molecules inside us consists of an oxygen atom bonded to two hydrogen atoms (H2O). The hydrogen nuclei act as tiny magnets and are very sensitive to magnetic fields. The first step in taking an MRI scan is to use a big magnet to produce a uniform magnetic field around the patient. The gradient divides the magnetic field into smaller sections of different strengths to isolate individual body parts. Take the brain as an example: normally the water molecules inside us are arranged randomly, but when we lie inside the magnetic field, most of them align with and move at the same rhythm, or frequency, as the field. The ones that don’t are called low-energy water molecules. To create an image of a body part, the machine focuses on these low-energy molecules. The radio waves move at the same frequency as the magnetic fields in the MRI machine.

By sending radio waves that match, or resonate with, the magnetic field, the machine gives the low-energy water molecules the energy they need to move along with the field. When the machine stops emitting radio waves, the water molecules that had just aligned with the field release the energy they absorbed and return to their original positions. This movement is detected by the MRI machine, and the signal is sent to a powerful computer which uses imaging software to translate the information into an image of the body. By taking images of the body in each section of the magnetic field, the machine produces a final three-dimensional image of the organ, which doctors can analyze to make a diagnosis.

“Medicine is a science of uncertainty and an art of probability.” – William Osler

 

 

Evolution of Art – Origins, Milestones and Masterpieces

Expressing oneself through art seems a universal human impulse, while the style of that expression is one of the distinguishing marks of a culture. As difficult as it is to define, art typically involves a skilled, imaginative creator whose creation is pleasing to the senses and often symbolically significant or useful. Art can be verbal, as in poetry, storytelling or literature, or can take the form of music and dance. The oldest stories, passed down orally, may be lost to us now, but thanks to writing, tales such as the Epic of Gilgamesh or the Iliad entered the record and still hold meaning today. Visual art dates back 30,000 years, when Paleolithic humans decorated themselves with beads and shells. Then as now, skilled artisans often mixed aesthetic effect with symbolic meaning.

In an existence that centered on hunting, ancient Australians carved animal and bird tracks into rock. Early cave artists in Lascaux, France, painted or engraved more than 2,000 real and mythical animals. Ancient Africans created stirring masks, highly stylized depictions of animals and spirits that allow the wearer to embody the spiritual power of those beings. Even when creating tools or kitchen items, people seem unable to resist decorating or shaping them for beauty. Ancient hunters carved the ivory handles of their knives. Ming dynasty ceramists embellished plates with graceful dragons. Modern Pueblo Indians incorporate traditional motifs into their carved and painted pots. The Western fine arts tradition values beauty and message. Once heavily influenced by Christianity and classical mythology, painting and sculpture have more recently moved toward personal expression and abstraction.

Humans have probably been molding clay – one of the most widely available materials in the world – since the earliest times. The era of ceramics began, however, only after the discovery that very high heat renders clay hard enough to be impervious to water. As societies grew more complex and settled, the need for ways to store water, food and other commodities increased. In Japan, the Jomon people were making ceramics as early as 11,000 B.C. By about the seventh millennium B.C., kilns were in use in the Middle East and China, achieving temperatures above 1,832°F. Mesopotamians were the first to develop true glazes, though the art of glazing arguably reached its highest expression in the celadon and three-color glazes of medieval China. In the New World, although potters never reached the heights of technology seen elsewhere, Moche, Maya, Aztec and Puebloan artists created a diversity of expressive figurines and glazed vessels.

When the Spanish nobleman Marcelino Sanz de Sautuola described the paintings he discovered in a cave in Altamira, contemporaries declared the whole thing a modern fraud. Subsequent finds confirmed the validity of his claims and proved that Paleolithic people were skilled artists. Early artists used stone tools to engrave shapes into walls. They used pigments from hematite, manganese dioxide and evergreens to achieve red, yellow, brown and black colors. Brushes were made from feathers, leaves and animal hair. Artists also used blowpipes to spray paint around hands and stencils.

 

Emotional Intelligence in the Workplace

Some people in our lives are appreciated for being very understanding as a friend or colleague, lending an ear and empathizing with our life events and struggles. Chances are such a friend has a very high degree of Emotional Intelligence. Emotional Intelligence is the ability to perceive, understand and manage our emotions, as well as influence others’ emotions in a positive way. This is different from general intelligence and its Intelligence Quotient (IQ), which represents abilities such as general knowledge, visual and spatial processing, working and short-term memory, and reasoning. While IQ is considered by many to be a decisive factor in achieving success in life, EQ (Emotional Quotient) is also an essential quality in areas including education, management and leadership. In fact, many companies include it in their hiring criteria, testing the EQ of applicants during the recruitment process. A high EQ is considered an important quality for managers and business leaders.

According to psychologist and author, Daniel Goleman, emotional intelligence includes five components. These five components are:

Self-Awareness

A person is said to be self-aware when they are highly aware of their own behaviour, habits and feelings. You can understand your strengths and weaknesses. You are aware of why you feel the way you do and how your actions can impact people around you. Being self-aware helps you in your work as it also keeps you humble and grounded.

Self-Regulation

Self-regulation refers to having good control over one’s actions and decisions. You think and react rationally, giving a lot of thought before making important decisions. As a manager or leader at your workplace, this quality helps manage critical situations and adapt to changes at work. It helps you make correct decisions, considering all possible consequences. You can stay calm, ease tensions and hold yourself accountable, particularly when you receive constructive criticism at work.

Motivation

A motivated individual is goal-oriented, giving it their all to achieve their end-goal. You maintain high standards of quality for your work and remain passionate and driven towards your aim. Self-motivation also means you are working for your personal development rather than material accomplishments like money, fame or status.  

Empathy

Empathy is the ability to understand and share the feelings of someone else. You listen to what others have to say and you can put yourself in their situation to understand what they must have gone through. An empathic person is not too judgmental, a trait that helps them work well with their colleagues by helping them progress and providing constructive criticism.

Social Skills

Social skills are essential for good communication and teamwork. Social skills help you become good listeners, engage verbally and maintain a good rapport with your team-mates. You can take up the leadership role if needed, supporting the whole team and managing conflicts diplomatically.

When a person is able to manage their emotions well and exhibit a high degree of EQ, it also influences the people around them to act the same way. This helps maintain positive relationships in the work environment. A well-developed EQ is crucial for achieving your work goals, particularly in group projects. Anybody can develop a good Emotional Quotient with practice and care. Small habits like listening, understanding and reflecting on your actions can help improve your emotional skills.

Why Companies use UGC

User-Generated Content (UGC) is one of the most influential methods of social media marketing or social commerce. Any content that is created by customers or individuals who are not working for brands, and published on social media is classified as User-Generated Content. UGC can be images, videos, and other social media content. It could also be reviews and testimonials regarding a product. The content created by the customer is then made use of by the brand, making it a part of their marketing strategy.

Brands like GoPro and Apple have used UGC effectively in their social media pages like YouTube and Instagram. The video equipment company shared videos created by its customers using their equipment on YouTube, successfully gaining lots of attention and views. Apple introduced the “Shot on iPhone” campaign some years back, encouraging users to share great photos they captured with the latest model of the iPhone. The best photos were shared by Apple not just on social media, but even on billboards and posters of their Ads outside.

Sites like Amazon and Tripadvisor have also used UGC for a long time, getting customers to post public reviews of a product or service online. These companies try to engage with customers, communicating politely and addressing their reviews and grievances. Such testimonials help build customer trust and give the brand a positive image.

How does UGC help a Brand?

Contributes Authenticity:

People are more likely to believe relatable, everyday people than a brand when it comes to a product. So, when they see an ordinary person recommending a product, they consider it more authentic than brand-created content.

Provides Brand Loyalty:

Involving consumers in the brand’s growth is a great way of deepening the connection between brand and consumer. The customer now feels like they are part of the brand’s community. An engaged community helps build brand loyalty.

Holds People’s Attention:

UGC has turned out to be very efficient at grabbing people’s attention. In many cases, it is considered catchier than traditional advertising.

Influences Purchasing Decisions:

When people get to see actual social proof that a particular product is useful and worth buying, it greatly influences their final purchasing decisions. It is seen to be even more impactful than products advertised by celebrities or influencers. UGC also helps increase conversion rates of the audience into potential new customers of the company.

Cost-effective:

UGC is very cost-effective as the brand can rely on the customers for useful content. They do not have to worry about spending money on professionals, studios and equipment. The brand only needs to maintain a healthy relationship with its customers to get positive content for free and without being asked.  

The Kardashev Scale – Classifying Alien Civilizations

The observable universe contains up to two trillion galaxies, each made of billions and billions of stars. In the Milky Way galaxy alone, scientists estimate that there are some 40 billion Earth-like planets in the habitable zones of their stars. Looking at these numbers, there are plenty of possibilities for alien civilizations to exist. In a universe this big and old, civilizations may have started millions of years apart from each other and developed in different directions and at different speeds, so they could range from cavemen to the super-advanced. We know that humans started out with nothing and went on to make tools, build houses and so on. We know that humans are curious, competitive, greedy for resources and expansionist. The more of these qualities our ancestors had, the more successful they were at building civilization.

Other alien civilizations may well have evolved similarly. Human progress can be measured quite precisely by how much energy we extract from our environment. As our energy consumption grew exponentially, so did the abilities of our civilization. Between 1800 and 2015, the population increased sevenfold, while humanity’s energy consumption grew 25-fold. It’s likely that this process will continue into the far future. Based on these facts, the scientist Nikolai Kardashev developed a method for categorizing civilizations, from cave dwellers to gods ruling over galaxies: the Kardashev scale. It ranks civilizations by their energy use and puts them into four categories. A Type 1 civilization is able to use the available energy of its home planet. A Type 2 civilization is able to use the available energy of its star and planetary system. A Type 3 civilization is able to use the available energy of its galaxy. A Type 4 civilization is able to use the available energy of multiple galaxies.

The gap between the types is like comparing an ant colony to a human metropolitan area: to ants, we are so complex and powerful that we might as well be gods. On the lower end of the scale are Type 0 to Type 1 civilizations – anything from hunter-gatherers to something we could achieve in the next few hundred years. These might actually be abundant in the Milky Way. If so, why aren’t they sending any radio signals into space? Even if they transmitted radio signals like we do, it might not be very helpful. In such a vast universe, our signals may extend over 200 light years, but this is only a tiny fraction of the Milky Way. And even if someone were listening, after a few light years our signals decay into noise, impossible to identify as coming from an intelligent species. Today humanity ranks at about level 0.75. We have created huge structures and changed the composition and temperature of the atmosphere. If progress continues, we will become a full Type 1 civilization within the next few hundred years. The next step toward Type 2 is trying to mine other planets and bodies.

As a civilization expands and uses more and more material and space, at some point it may start its largest project yet: extracting the energy of its star by building a Dyson swarm. Once that is finished, energy becomes effectively unlimited, and the next frontier moves to other stars light years away. As a species gets closer to Type 3, it might discover new physics, come to understand and control dark matter and dark energy, or be able to travel faster than light. To them, humans are the ants trying to understand the galactic metropolitan area. A high Type 2 civilization might already consider humanity too primitive; a Type 3 civilization might consider us bacteria. But the scale doesn’t end here: some scientists suggest there might be Type 4 and Type 5 civilizations, whose influence stretches over galaxy clusters or superclusters. The scale is just a thought experiment, but it still raises interesting questions. Who knows, there might be a Type Omega civilization, able to manipulate the entire universe, which might even be the actual creator of our universe.
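The ranking can also be made quantitative. Kardashev's scale is often extended with Carl Sagan's interpolation formula, K = (log10 P − 6) / 10, where P is a civilization's power use in watts. The sketch below assumes that formula and a rough figure of about 2 × 10^13 W for present-day humanity, which lands near the 0.75 level quoted above:

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Carl Sagan's continuous version of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type 1 corresponds to ~1e16 W, Type 2 to ~1e26 W, Type 3 to ~1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power consumption is roughly 2e13 W (an assumed
# round figure), which gives a rating of about 0.73.
print(round(kardashev_rating(2e13), 2))
```

Note how slowly the rating climbs: each whole step on the scale requires ten billion times more power than the last.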

James Webb space telescope – Working and Application

The James Webb Space Telescope, or JWST, will replace the Hubble Space Telescope and will help us see the universe as it was shortly after the Big Bang. It was named after NASA’s second administrator, James Webb, who headed the agency from 1961 to 1968. The new telescope was first planned for launch into orbit in 2007 but has been delayed more than once; it is now scheduled for 18 December 2021. After 2030 the Hubble will go into a well-deserved rest; since its launch in 1990 it has provided more than a million images of thousands of stars, nebulae, planets and galaxies. The Hubble captured images of stars as they appeared about 380 million years after the Big Bang, which supposedly happened 13.7 billion years ago. Though these objects may no longer exist, we still see their light. We now expect the James Webb to show us the universe as it was only 100 to 250 million years after its birth; it could transform our current understanding of the structure of the universe. The Spitzer and Hubble space telescopes have collected data on the gas shells of about a hundred planets. According to experts, the James Webb is capable of exploring the atmospheres of more than 300 different exoplanets.

The working of the James Webb Space Telescope

The James Webb is an orbiting infrared observatory that will investigate the thermal radiation of space objects. When heated to a certain temperature, all solids and liquids emit energy in the infrared spectrum, and there is a relationship between wavelength and temperature: the higher the temperature, the shorter the wavelength and the higher the radiation intensity. The James Webb’s sensitive equipment will be able to study cold exoplanets with surface temperatures of up to 27° Celsius. An important quality of the new telescope is that it will revolve around the Sun, not the Earth, unlike the Hubble, which is located at an altitude of about 570 kilometers in low Earth orbit. With the James Webb orbiting the Sun, it will be impossible for the Earth to interfere with it; however, the James Webb will move in sync with the Earth to maintain strong communication. The distance from the James Webb to the Earth will be between about 374,000 and 1.5 million kilometers, in the direction opposite the Sun, so its design must be extremely reliable.
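The temperature–wavelength relationship described above is Wien's displacement law: a black body's peak emission wavelength is λ = b / T, with b ≈ 2.898 × 10⁻³ m·K. A small sketch shows why a cool object like a 27 °C (300 K) exoplanet is best observed in the infrared:

```python
WIEN_B = 2.897771955e-3  # Wien's displacement constant, in meter-kelvins

def peak_wavelength_um(temp_kelvin: float) -> float:
    """Peak emission wavelength in micrometers of a black body
    at the given temperature, via Wien's displacement law."""
    return WIEN_B / temp_kelvin * 1e6

# A 300 K exoplanet peaks near 10 micrometers, deep in the infrared,
# while the ~5800 K Sun peaks near 0.5 micrometers (visible light).
print(round(peak_wavelength_um(300), 1))
print(round(peak_wavelength_um(5800), 2))
```

This is why an infrared observatory, rather than an optical one, is the right tool for cold, distant targets.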

The James Webb telescope weighs 6.2 tonnes. The main mirror of the telescope has a diameter of 6.5 meters and a collecting area of 25 square meters; it resembles a giant honeycomb consisting of 18 sections. Due to its impressive size, the main mirror has to be folded for launch; this giant mirror will capture light from the most distant galaxies. The mirror can create a clear picture and eliminate distortion. A special type of beryllium, which retains its shape at cryogenic temperatures, was used in the mirror. The front of the mirror is covered with a layer of gold, 48.25 grams in total and 100 nanometers thick; such a coating best reflects infrared radiation. A small secondary mirror sits opposite the main mirror; it receives light from the main mirror and directs it to instruments at the rear of the telescope. The sunshield is 20 meters long and 7 meters wide. It is composed of very thin layers of Kapton polyimide film, which protect the mirror and instruments from sunlight and cool the telescope’s ultra-sensitive sensors to about −220° Celsius.

The NIRCam (Near Infrared Camera) is the main set of eyes of the telescope; with the NIRCam we expect to be able to view the oldest stars in the universe and the planets around them. The NIRSpec near-infrared spectrograph will collect information on both the physical and chemical properties of an object, and the MIRI mid-infrared instrument will allow us to see stars being born and many unknown objects of the Kuiper Belt. The NIRISS near-infrared imager and slitless spectrograph is aimed at finding exoplanets and the first light of distant objects. Finally, the FGS (Fine Guidance Sensor) helps accurately point the telescope for higher-quality images; it updates the telescope’s position in space sixteen times per second and controls the operation of the steering and main mirrors. The telescope is planned to launch on the European Ariane 5 launch vehicle from the Guiana Space Centre in Kourou, French Guiana. The device is designed for between 5 and 10 years of operation, but it may serve longer. If everything goes well, $10 billion worth of construction and a year of preparation will finally get to work in orbit.

 

Medical breakthroughs – Laparoscopy

 

Treating illness by using tools to remove or manipulate parts of the human body is an old idea. Even minor operations carried high risks, but that doesn’t mean all early surgery failed. Indian doctors, centuries before the birth of Christ, successfully removed tumors and performed amputations and other operations. They developed dozens of metal tools, relied on alcohol to dull the patient, and controlled bleeding with hot oil and tar. The 20th century brought even more radical change through technology. Advances in fiber-optic technology and the miniaturization of video equipment have revolutionized surgery. The laparoscope is the James Bond-like gadget of the surgeon’s repertoire of instruments. Only a small incision is made through the patient’s abdominal wall, into which the surgeon puffs carbon dioxide to open up the passage.

Using a laparoscope for visual assessment, diagnosis and even surgery causes less physiological damage, reduces patients’ pain and speeds their recovery, leading to shorter hospital stays. In the early 1900s, Germany’s Georg Kelling developed a surgical technique in which he injected air into the abdominal cavity and inserted a cystoscope – a tube-like viewing scope – to assess the patient’s innards. In late 1901, he began experimenting and successfully peered into a dog’s abdominal cavity using the technique. Without cameras, laparoscopy’s use was limited to diagnostic procedures carried out by gynecologists and gastroenterologists. By the 1980s, improvements in miniature video devices and fiber optics inspired surgeons to embrace minimally invasive surgery. In 1996, the first live broadcast of a laparoscopy took place. A year later, Dr. J. Himpens used a computer-controlled robotic system to aid in laparoscopy. This type of surgery is now used for gallbladder removal as well as for the diagnosis and surgical treatment of fertility disorders, cancer and hernias.

Hypothermia, a drop in body temperature significantly below normal, can be life threatening, as in the case of overexposure to severe wintry conditions. But in some cases, like that of Kevin Everett of the Buffalo Bills, hypothermia can be a lifesaver. Everett fell to the ground with a potentially crippling spinal cord injury during a 2007 football game. Doctors treating him on the field immediately injected his body with a cooling fluid. At the hospital, they inserted a cooling catheter to lower his body temperature by roughly five degrees, at the same time proceeding with surgery to fix his fractured spine. Despite fears that he would be paralyzed, Everett has regained his ability to walk, and advocates of therapeutic hypothermia feel his lowered body temperature may have made the difference. Therapeutic hypothermia is still a controversial procedure. The side effects of excessive cooling include heart problems, blood clotting, and increased infection risk. On the other hand, supporters claim, it slows down cell damage, swelling, and other destructive processes well enough that it can mean successful surgery after a catastrophic injury. Surgical lasers can generate heat of up to 10,000°F on a pinhead-sized spot, sealing blood vessels and sterilizing tissue. Surgical robots and virtual computer technology are changing medical practice. Robotic surgical tools increase precision. In 1998, heart surgeons at Paris’s Broussais Hospital performed the first robotic surgery. New technology allows enhanced views and precise control of instruments.

“After a complex laparoscopic operation, the 65-year-old patient was home in time for dinner.” – Elisa Birnbaum, surgeon

 

History of Steam Engines – Thomas Savery

Thomas Newcomen, a Devonshire blacksmith, developed the first successful steam engine in the world and used it to pump water from mines. His engine was a development of the thermic siphon built by Thomas Savery, whose surface condensation patents blocked his own designs. Newcomen’s engine allowed steam to condense inside a water-cooled cylinder, the vacuum produced by this condensation being used to draw down a tightly fitting piston that was connected by chains to one end of a huge, wooden, centrally pivoted beam. The other end of the beam was attached by chains to a pump at the bottom of the mine. The whole system was run safely at near atmospheric pressure, the weight of the atmosphere being used to depress the piston into the evacuated cylinder.

Newcomen’s first atmospheric steam engine worked at Conygree in the West Midlands of England. Many more were built in the next seventy years, the initial brass cylinders being replaced by larger cast iron ones, some up to 6 feet (1.8 m) in diameter. The engine was relatively inefficient, and in areas where coal was not plentiful it was eventually replaced by double-acting engines designed by James Watt. These used both sides of the cylinder for power strokes and usually had separate condensers. James Watt was responsible for some of the most important advances in steam engine technology.

In 1765 Watt made the first working model of his most important contribution to the development of steam power; he patented it in 1769. His innovation was an engine in which steam condensed outside the main cylinder in a separate condenser. The cylinder remained at working temperature at all times. Watt made several other technological improvements to increase the power and efficiency of his engines. For example, he realized that, within a closed cylinder, low pressure steam could push the piston instead of atmospheric air. It took only a short mental leap for Watt to design a double-acting engine in which steam pushed the piston first one way, then the other, increasing efficiency still further.

Watt’s influence in the history of steam engine technology owes as much to his business partner, Matthew Boulton, as it does to his own ingenuity. The two men formed a partnership in 1775, and Boulton poured huge amounts of money into Watt’s innovations. From 1781, Boulton and Watt began making and selling steam engines that produced rotary motion. All the previous engines had been restricted to a vertical, pumping action. Rotary steam engines were soon the most common source of power for factories, becoming a major driving force behind Britain’s industrial revolution.

By the age of nineteen, Cornishman Richard Trevithick was working for the Cornish mining industry as a consultant engineer. The mine owners were attempting to skirt around the patents owned by James Watt. William Murdoch had developed a model steam carriage, starting in 1784, and demonstrated it to Trevithick in 1794. Trevithick thus knew that recent improvements in the manufacturing of boilers meant that they could now cope with much higher steam pressure than before. By using high pressure steam in his experimental engines, Trevithick was able to make them smaller, lighter, and more manageable.

Trevithick constructed high pressure working models of both stationary and locomotive engines that were so successful that in 1799 he built a full scale, high pressure engine for hoisting ore. The used steam was vented out through a chimney into the atmosphere, bypassing Watt’s patents. Later, he built a full size locomotive that he called the Puffing Devil. On December 24, 1801, this bizarre-looking machine successfully carried several passengers on a journey up Camborne Hill in Cornwall. Despite objections from Watt and others about the dangers of high pressure steam, Trevithick’s work ushered in a new era of mechanical power and transport.

How do we measure distances in space? Light years

In the 1800s, scientists discovered the realm of light beyond what is visible. The 20th century saw dramatic improvements in observation technologies. Now we are probing distant planets, stars, galaxies and black holes that even light would take years to reach. So how do we do that? Light is the fastest thing we know in the universe. It is so fast that we measure enormous distances by how long it takes for light to travel them. In one year, light travels about 6 trillion miles; this is the distance we call one light year. Apollo 11 took four days to reach the moon, but the moon is only just over one light second from Earth. Meanwhile, the nearest star beyond our own sun, Proxima Centauri, is 4.24 light years away. Our Milky Way galaxy is on the order of 100,000 light years across. The nearest large galaxy to our own, Andromeda, is about 2.5 million light years away.
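These figures are easy to sanity-check with a quick back-of-the-envelope calculation. A minimal sketch, using rounded values for the speed of light and the average earth-moon distance:

```python
# Rough check of the light-year figures quoted above (rounded constants).
SPEED_OF_LIGHT_MILES_PER_S = 186_282   # speed of light, miles per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # one year in seconds
MOON_DISTANCE_MILES = 238_900          # average earth-moon distance

# Distance light travels in one year, in miles (~5.9 trillion)
light_year_miles = SPEED_OF_LIGHT_MILES_PER_S * SECONDS_PER_YEAR
print(f"One light year is about {light_year_miles / 1e12:.1f} trillion miles")

# The moon is just over one light second away (~1.28 s)
moon_light_seconds = MOON_DISTANCE_MILES / SPEED_OF_LIGHT_MILES_PER_S
print(f"The moon is about {moon_light_seconds:.2f} light seconds away")
```
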

The question is, how do we know the distance of these stars and galaxies? For objects that are very close by, we can use a concept called trigonometric parallax. Hold up your thumb, close your left eye, then open your left eye and close your right eye. It will look like your thumb has moved, while more distant objects have remained in place. The same concept applies when measuring distant stars, but they are much farther away than the length of your arm, and the earth is not large enough: even if you had telescopes at different points across the equator, you would not see much of a shift in position. So we look at the change in a star’s apparent location over six months. When we measure the relative positions of the stars in summer, and then again in winter, nearby stars seem to have moved against the background of the more distant stars and galaxies.
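The parallax method reduces to a simple formula: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds, and one parsec is about 3.26 light years. A minimal sketch, using the published parallax of roughly 0.768 arcseconds for Proxima Centauri:

```python
def parallax_distance_light_years(parallax_arcsec: float) -> float:
    """Trigonometric parallax: d (parsecs) = 1 / p (arcseconds)."""
    PARSEC_IN_LIGHT_YEARS = 3.26
    return (1.0 / parallax_arcsec) * PARSEC_IN_LIGHT_YEARS

# Proxima Centauri's parallax of ~0.768 arcseconds recovers
# the 4.24 light years quoted above.
print(round(parallax_distance_light_years(0.768), 2))  # -> 4.24
```
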

But this method only works for objects less than a few thousand light years away. For greater distances, we use a different method, based on indicators called standard candles. Standard candles are objects whose intrinsic brightness, or luminosity, we know well. For example, if you know how bright your light bulb is, then even when you move away from it you can find the distance by comparing the amount of light you receive to the intrinsic brightness. In astronomy, one such standard candle is a special type of star called a Cepheid variable. These stars continually contract and expand, and because of this their brightness varies. We can calculate the luminosity by measuring the period of this cycle, with more luminous stars changing more slowly. By comparing the light that we receive to the intrinsic brightness, we can calculate the distance.
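The standard-candle idea is just the inverse-square law: if a source of known luminosity L produces a measured flux F, the distance is d = sqrt(L / (4·pi·F)). A sketch using the light-bulb example from above:

```python
import math

def distance_from_standard_candle(luminosity_watts: float,
                                  observed_flux_w_per_m2: float) -> float:
    """Inverse-square law: F = L / (4*pi*d^2), so d = sqrt(L / (4*pi*F))."""
    return math.sqrt(luminosity_watts / (4 * math.pi * observed_flux_w_per_m2))

# A 100 W bulb, measured at the flux it would produce 10 m away,
# is recovered as being 10 m distant.
flux_at_10_m = 100 / (4 * math.pi * 10**2)
print(round(distance_from_standard_candle(100, flux_at_10_m), 6))  # -> 10.0
```
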

But we can only observe individual stars up to about 40 million light years away. Beyond that, we have to use another type of standard candle: the Type Ia supernova. Supernovae are giant stellar explosions, one of the ways that stars die. These explosions are so bright that they outshine the galaxies where they occur. We can use Type Ia supernovae as standard candles because intrinsically brighter ones fade more slowly than fainter ones. With an understanding of this brightness and decline rate, we can use supernovae to probe distances up to several billions of light years away. But what is the importance of seeing distant objects? Well, the light emitted by the sun takes eight minutes to reach us, which means that the light we see now is a picture of the sun eight minutes ago. For galaxies millions of light years away, it has taken millions of years for that light to reach us. So the universe has a kind of inbuilt time machine: the further back we look, the younger the universe we are probing. Astrophysicists use this to read the history of the universe, and to understand how and where we came from.

“Dream in light years, challenge miles, walk step by step.” – William Shakespeare

Why Do Waves Occur? Waves and Tides

Why do waves form?

A wave begins as the wind ruffles the surface of the ocean. When the ocean is calm and glasslike, even the mildest breeze forms ripples, the smallest type of wave. Ripples provide surfaces for the wind to act on, which produces larger waves. Stronger winds push the nascent waves into steeper and higher hills of water. The size a wave reaches depends on the speed and strength of the wind, the length of time it blows, and the distance over which it blows in the open ocean, which is known as the fetch. A long fetch accompanied by strong and steady winds can produce enormous waves. The highest point of a wave is called the crest and the lowest point the trough. The distance from one crest to the next is known as the wavelength.

On November 1, 2011, US surfer Garrett McNamara surfed a massive 78-foot (23.8-meter) wave at Nazaré.

Although water appears to move forward with the waves, for the most part water particles travel in circles within the waves. The visible movement is the wave’s form and energy moving through the water, courtesy of energy provided by the wind. Wave speed also varies; on average, waves travel about 20 to 50 mph. Ocean waves vary greatly in height from crest to trough, averaging 5 to 10 feet. Storm waves may tower 50 to 70 feet or more. The biggest wave ever recorded by humans occurred in Lituya Bay, on the southeast side of Alaska, on July 9, 1958, when a massive earthquake triggered a megatsunami, the tallest in modern times. As a wave enters shallow water and nears the shore, its up-and-down movement is disrupted and it slows down. The crest grows higher and begins to surge ahead of the rest of the wave, eventually toppling over and breaking apart. The energy released by a breaking wave can be explosive. Breakers can wear down rocky coasts and also build up sandy beaches.
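The 20 to 50 mph figure can be illustrated with the standard deep-water dispersion relation, c = sqrt(g · wavelength / (2·pi)), which is not stated in the text above but is the usual way a wave's speed is tied to its wavelength. A sketch:

```python
import math

def deep_water_wave_speed_mph(wavelength_ft: float) -> float:
    """Phase speed of a deep-water wave: c = sqrt(g * wavelength / (2*pi))."""
    g = 32.17  # gravitational acceleration, ft/s^2
    speed_ft_per_s = math.sqrt(g * wavelength_ft / (2 * math.pi))
    return speed_ft_per_s * 3600 / 5280  # convert ft/s to mph

# A 300-foot wavelength gives a speed inside the 20-50 mph range quoted above.
print(round(deep_water_wave_speed_mph(300), 1))
```
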

Why does a tide occur?

Tides are the regular daily rise and fall of ocean waters. Twice each day in most locations, water rises up over the shore until it reaches its highest level, or high tide. In between, the water recedes from the shore until it reaches its lowest level, or low tide. Tides respond to the gravitational pull of the moon and sun. Gravitational pull has little effect on the solid and inflexible land, but the fluid oceans react strongly. Because the moon is closer, its pull is greater, making it the dominant force in tide formation.

Gravitational pull is greatest on the side of earth facing the moon and weakest on the side opposite the moon. The difference in these forces, in combination with earth’s rotation and other factors, causes the oceans to bulge outward on both sides, creating high tides. The sides of earth that are not in alignment with the moon experience low tides at this time. Tides follow different patterns, depending on the shape of the seacoast and the ocean floor. In Nova Scotia, water at high tide can rise more than 50 feet higher than the low tide level. Tides tend to roll in gently on wide, open beaches; in confined spaces, such as a narrow inlet or bay, the water may rise to very high levels at high tide.

There are typically two spring tides and two neap tides each month. During a spring tide, the range is greater than the mean range: the water level rises and falls to the greatest extent from the mean tide level. Spring tides occur about every two weeks, when the moon is full or new. Tides are at their maximum when the moon, the sun, and the earth are aligned. In a semidiurnal cycle, the high and low tides occur around 6 hours and 12.5 minutes apart. The same tidal forces that cause tides in the oceans affect the solid earth, causing it to change shape by a few inches.
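The 6 hours 12.5 minutes figure follows directly from the lunar day. The moon returns overhead every 24 hours 50 minutes, and a semidiurnal coast sees two high and two low tides per lunar day, so successive highs and lows are a quarter of a lunar day apart:

```python
# The semidiurnal tide spacing is a quarter of the lunar day (24 h 50 min):
# two tidal bulges sweep past as the earth rotates under the moon.
lunar_day_minutes = 24 * 60 + 50
quarter_minutes = lunar_day_minutes / 4  # time from high tide to low tide

print(f"{quarter_minutes // 60:.0f} h {quarter_minutes % 60:.1f} min")
```
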

 

 

Black Holes – Hawking Radiation, Definition and Facts

When a massive star dies, it leaves a small but dense remnant core in its wake. If the mass of the core is more than 3 times the mass of the sun, the force of gravity overwhelms all other forces and a black hole is formed. Imagine a star 10 times more massive than our sun being squeezed into a sphere with a diameter equal to the size of New York City. The result is a celestial object whose gravitational field is so strong that nothing, not even light, can escape it. The history of black holes starts with the father of all physics, Isaac Newton. In 1687, Newton gave the first description of gravity in his publication Principia Mathematica, which would change the world. Then, about 100 years later, John Michell proposed the idea that there could exist an object massive enough that not even light would be able to escape its gravitational pull. In 1796, the famous French scientist Pierre-Simon Laplace made an important prediction about the nature of black holes: he suggested that because even the speed of light is slower than the escape velocity of a black hole, these massive objects would be invisible. In 1915, Albert Einstein changed physics forever by publishing his theory of general relativity. In this theory, he explained spacetime curvature and gave a mathematical description of a black hole. And in 1967, John Wheeler gave these objects the name “black hole”.

The “Interstellar” black hole was created using a new CGI rendering software that was based on theoretical equations provided by Thorne.

In classical physics, the mass of a black hole cannot decrease; it can either stay the same or get larger, because nothing can escape a black hole. If mass and energy are added to a black hole, then its radius and surface area should also get bigger. For a black hole, this radius is called the Schwarzschild radius. The second law of thermodynamics states that the entropy of a closed system always increases or remains the same. Hawking postulated an analogous theorem for black holes, called the second law of black hole mechanics: in any natural process, the surface area of the event horizon of a black hole always increases or remains constant; it never decreases. In 1974, Stephen Hawking, an English theoretical physicist and cosmologist, proposed a groundbreaking theory regarding a special kind of radiation, which later became known as Hawking radiation. In thermodynamics, a black body doesn’t transmit or reflect any radiation; it only absorbs radiation.
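The Schwarzschild radius mentioned above has a simple closed form, r_s = 2GM/c². A quick sketch for the 10-solar-mass example from the start of this section:

```python
# Schwarzschild radius r_s = 2*G*M / c^2 of a 10-solar-mass black hole.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458        # speed of light, m/s
SOLAR_MASS = 1.989e30  # mass of the sun, kg

def schwarzschild_radius_km(mass_kg: float) -> float:
    return 2 * G * mass_kg / C**2 / 1000  # convert meters to kilometers

# Roughly city-sized, as the New York City comparison above suggests.
print(round(schwarzschild_radius_km(10 * SOLAR_MASS), 1))  # -> 29.5
```
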

When Stephen Hawking first saw these ideas, he found the notion of shining black holes preposterous. But when he applied the laws of quantum mechanics to general relativity, he found the opposite to be true: he realized that stuff can come out near the event horizon. In 1974, he published a paper in which he outlined a mechanism for this shine, based on the Heisenberg uncertainty principle. According to quantum mechanics, for every particle in the universe there exists an antiparticle. These particles exist in pairs, and continually pop in and out of existence everywhere in the universe. Typically, they don’t last long: as soon as a particle and its antiparticle pop into existence, they annihilate each other and cease to exist almost immediately after their creation.

In 2019, the Event Horizon Telescope (EHT) collaboration produced the first-ever image of a black hole.

The event horizon is the boundary beyond which nothing can escape the black hole’s gravity. If a virtual particle pair blips into existence very close to the event horizon of a black hole, one of the particles could fall into the black hole while the other escapes. The one that falls into the black hole effectively has negative energy, which is, in layman’s terms, akin to subtracting energy from the black hole, or taking mass away from it. The other particle of the pair, the one that escapes the black hole, has positive energy, and is referred to as Hawking radiation. Due to the presence of Hawking radiation, a black hole continues to lose mass and continues shrinking until the point where it loses all its mass and evaporates. It is not clearly established what an evaporating black hole would actually look like. The Hawking radiation itself would contain highly energetic particles, antiparticles and gamma rays. Such radiation is invisible to the naked eye, so an evaporating black hole might not look like anything at all. It is also possible that Hawking radiation might power a hadronic fireball, which could degrade the radiation into gamma rays and particles of less extreme energy, which would make an evaporating black hole visible. Scientists and cosmologists still don’t completely understand how quantum mechanics explains gravity, but Hawking radiation continues to inspire research and provide clues into the nature of gravity and how it relates to the other forces of nature.
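Hawking also derived a temperature for this radiation, T = ħc³ / (8·pi·G·M·k_B): the smaller the black hole, the hotter it is and the faster it evaporates. A sketch for a solar-mass black hole:

```python
import math

# Hawking temperature of a black hole: T = hbar*c^3 / (8*pi*G*M*k_B).
HBAR = 1.0546e-34      # reduced Planck constant, J*s
C = 299_792_458        # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.3807e-23       # Boltzmann constant, J/K
SOLAR_MASS = 1.989e30  # mass of the sun, kg

def hawking_temperature_kelvin(mass_kg: float) -> float:
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A solar-mass black hole is astonishingly cold -- far below the 2.7 K cosmic
# microwave background, so today it absorbs more energy than it radiates.
print(f"{hawking_temperature_kelvin(SOLAR_MASS):.2e} K")
```
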

 

The Large Hadron Collider – Most Powerful Particle Accelerator

The smallest thing we can see with a light microscope is about 500 nanometers across. A typical atom is anywhere from 0.1 to 0.5 nanometers in diameter, so we need an electron microscope to observe atoms. The electron microscope was invented in 1931. Beams of electrons are focused on a sample; when they hit it, they are scattered, and this scattering is used to recreate an image. Then what about protons or neutrons? Or what about quarks, the most fundamental building blocks of matter? How did we find that such small particles exist? The answer is the particle collider, a tool that accelerates two beams of particles and smashes them together, in use since the 1960s.

The largest machine built by man, the Large Hadron Collider (LHC) is a particle accelerator occupying an enormous circular tunnel 27 kilometers in circumference, running from 165 to 575 feet below ground. It is situated near Geneva, Switzerland; it is so large that over the course of its circumference it crosses the border between France and Switzerland. It is a giant collaboration involving over 100 countries and 10,000 scientists. The tunnel itself was constructed between 1983 and 1988 to house another particle accelerator, the Large Electron–Positron Collider (LEP), which operated until 2000. Its replacement, the LHC, was approved in 1995 and was finally switched on in September 2008.

Working of the Large Hadron Collider

The LHC is the most powerful particle accelerator ever built and was designed to explore the limits of what physicists refer to as the Standard Model, which deals with fundamental subatomic particles. Two vacuum pipes are installed inside the tunnel, which intersect in some places, and 1,232 main magnets are connected to the pipes. For proper operation, the collider magnets need to be cooled to -271.3 °C; to attain this temperature, 120 tons of liquid helium are poured into the LHC. These powerful magnets can accelerate protons to near the speed of light, so that they complete a circuit in less than 90 millionths of a second. Two beams operate in opposite directions around the ring. At four separate points the two beams cross, causing protons to smash into each other at enormous energies, with their destruction witnessed by super-sensitive instruments. But it’s not easy to do this experiment. Each beam consists of bunches of protons, and most of the protons simply miss each other, carry on around the ring, and try again. Because atoms are mostly empty space, getting them to collide is incredibly difficult; it is like getting two needles, fired at each other from 10 kilometers apart, to meet head on.
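The 90-millionths-of-a-second figure is easy to verify: at essentially the speed of light, a proton laps the roughly 27 km ring in under 90 microseconds, about 11,000 times per second. A sketch, using the commonly quoted 26,659 m circumference:

```python
# Time for a near-light-speed proton to complete one lap of the LHC ring.
C = 299_792_458              # speed of light, m/s
RING_CIRCUMFERENCE_M = 26_659  # approximate LHC circumference, meters

lap_time_us = RING_CIRCUMFERENCE_M / C * 1e6  # lap time in microseconds
laps_per_second = C / RING_CIRCUMFERENCE_M

print(f"{lap_time_us:.1f} microseconds per lap, "
      f"about {laps_per_second:,.0f} laps per second")
```
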

The aim of these collisions is to produce countless new particles that simulate, on a micro scale, some of the conditions postulated in the Big Bang at the birth of the universe. The Higgs boson, the so-called “God particle” that could be responsible for the very existence of mass, was discovered with the help of the LHC. If it disappeared, all particles in the universe would become absolutely weightless and fly around the universe at the speed of light, whose exact value is 299,792,458 m/s. At that speed, light reaches our moon from earth in about 1.3 seconds.

“When you look at a vacuum in a quantum theory of fields, it isn’t exactly nothing.” – Peter Higgs

Public Relations

When the COVID-19 pandemic began and the whole world went into lockdown, the taxi and food delivery services company Uber released a video on YouTube called “Thank You for Not Riding” as part of the campaign #MoveWhatMatters. The campaign was the company’s way of thanking its customers for reducing travel and maintaining social distancing. Uber compensated drivers around the world who were not able to work for many months and provided free rides and food deliveries to front-line healthcare workers and citizens. This was Uber upholding its responsibility in a challenging situation. In the last two years, many interesting public relations campaigns have been taken up by companies and organizations across the world.

Public Relations, or PR, refers to the process of communication between an organization, company, or individual and the public. It is the art and science of talking to the right audience in the right way. Public relations can influence and shape a company’s image, reputation and brand perception. A PR specialist or PR officer is responsible for maintaining the image of the company they work for. To ensure the company’s good image, they formulate communication plans and use media and other direct and indirect channels.

The primary aim of PR is to maintain a good relationship with the public, the target audience, investors, employees and stakeholders, which helps the company earn a positive reputation and encourages people to believe the company is honest and relevant.

If PR is about maintaining a company’s relevance with the public, how is it really different from advertising? Here is the difference: advertising is paid promotion, while PR is earned. Companies pay newspapers, television channels and other media to display their Ads, but PR promotes a brand through editorial content appearing in various media. Audiences usually look at Ads skeptically, while PR promotions help build trust in the audience because they carry third-party validation from the medium in which they appear. PR services are also cheaper compared to advertising and marketing services in the industry.

PR is influential in building brand reputation. A good PR agency can help a company improve its credibility and reputation, making sure the company gets proper attention and positive feedback for all of its projects, work and news updates. PR also has a very important role to play in crisis management: situations in which the image of a company may be in danger, perhaps due to some miscommunication. It is the PR team’s responsibility to communicate with the target audience and the public and clear up possible misconceptions. They have to work to get rid of any negative publicity the company may have received.

Here are two more examples of great PR campaigns in India:

#TouchOfCare by Vicks:

Companies often try to bring attention to compelling public issues with their campaigns. In 2017, Vicks released a heartwarming video as part of its campaign #TouchOfCare. The video showed how Gauri, a transgender woman, raised an orphan girl, Gayatri, with all the love and care in the world, even as she faced struggles in society. Vicks believes that everyone deserves to be cared for and receive the touch of care. With this video, they showed how everybody needs someone to care for and love them, whether they are connected by blood or not. The video got lots of positive feedback, generating about 4 million views in the first 48 hours after its release.

#ItsJustAPeriod by Stayfree:

Stayfree launched the campaign #ItsJustAPeriod in 2020 to encourage period-related conversations and remove the stigma associated with periods in Indian families, particularly as India went into lockdown and schools closed. A video was released on YouTube with many actors and influencers coming forward in support of the movement. With the majority of the Indian population at home, the campaign achieved a huge social media outreach, with 10.17 million engagements collectively on Facebook, YouTube and Instagram.