We are in the middle of a pandemic caused by a virus. Viruses are ultramicroscopic, disease-causing entities of organic matter that can replicate or multiply only in the presence of a living host cell. They are called entities because they exhibit both living and non-living characters, and it is difficult to classify them as either one.
History
Russian botanist Dmitri Ivanovsky reported a virus for the first time in 1892, describing tobacco mosaic virus. In 1898 M. W. Beijerinck recognized that viruses were smaller than bacteria and could easily pass through bacterial filters. In the same year, Friedrich Loeffler and P. Frosch in Germany demonstrated that foot-and-mouth disease in cattle is caused by viruses. In 1900 Walter Reed et al. showed that yellow fever in humans was due to a virus, and in 1915 F. W. Twort in England discovered bacteriophages.
In 1935 W. M. Stanley isolated and crystallized tobacco mosaic virus, showing that viruses also contain protein. In 1936 Bawden and Pirie demonstrated viruses to be nucleoproteins, made up of nucleic acids and proteins. A major development was the determination of the ultrastructure of TMV by X-ray diffraction, accomplished by R. Franklin and A. Klug. Cyanophages were discovered in 1963.
What are the general characters of a virus?
They are very small, so small that they are not visible even under a compound microscope. Their size varies from 20 nm to 400 nm.
They are obligate parasites; they multiply only in the presence of a living host cell.
Viruses lack any cellular organization.
Unlike bacteria, viruses cannot be cultured in the lab on synthetic media.
Viruses are highly specific: they do not infect just any cell at random but seek out particular cells, which vary from virus to virus. They are also specific to tissues, organs, organisms and species.
Viruses can be inactivated by heat or chemicals.
Viruses also lack metabolism. They are totally dependent on the host cell.
Viruses undergo mutation very often.
Living characters shown by viruses:
Are viruses living? We don't know for certain. But they contain nucleic acids, which we consider the basis of life. Like every other organism, they have a definite form and size. They grow and reproduce in the host cell. They can infect host cells and cause diseases. They undergo mutation and pass their characters on to their offspring. All of these characters are shown by living organisms.
Non-living properties of viruses:
Even though they exhibit some of the living characters mentioned above, they lack the cellular organization of other organisms. This raises the biggest confusion. They behave like non-living matter outside the host cell; some viruses can even be crystallized and stored in bottles and jars. And they lack metabolism, which is a basic characteristic of a living organism.
How are viruses classified?
Based on the hosts they infect, viruses include bacteriophages, which infect bacteria; cyanophages, which infect blue-green algae; phytophages, which infect plants; and zoophages, which infect animals.
A bacteriophage
A virus is considered a DNA virus if the nucleic acid present in it is DNA, and an RNA virus if the nucleic acid present is RNA.
No matter how they are classified or what their features are, viruses are unique and often dangerous to animals. Even though vaccination can help, avoiding exposure to them is the best way to keep oneself safe.
On July 11, Virgin Galactic will make a giant leap toward commercial suborbital spaceflight. The company will launch its first fully crewed flight of its SpaceShipTwo space plane Unity with a special passenger on board: the company’s billionaire founder Richard Branson.
Branson, three crewmates and two pilots will launch on the historic flight after being carried into launch position by Virgin Galactic’s carrier plane VMS Eve. They will take off from the company’s homeport of Spaceport America in New Mexico, with a live webcast chronicling the flight. Here’s everything you need to know about the mission, which Virgin Galactic has dubbed Unity 22.
WHAT TIME IS VIRGIN GALACTIC'S LAUNCH AND CAN I WATCH?
Virgin Galactic has not released a specific time for the actual Unity 22 launch, but the company has announced it will begin webcasting the mission at 9 a.m. EDT (1300 GMT). And it looks like it’s going to be fun. The crew will walk out to the ship about an hour earlier.
Stephen Colbert, host of The Late Show on CBS, will host the webcast along with singer Khalid (who will debut a new single during the launch), former Canadian Space Agency astronaut Chris Hadfield and future Virgin Galactic astronaut Kellie Gerardi, who will launch on a research flight in 2022.
The webcast will begin with the Unity spacecraft and its carrier plane taking off from its runway at Spaceport America, which is located 55 miles (88 kilometers) north of Las Cruces, New Mexico.
Branson has stated that the entire flight will take about 90 minutes, including the ascent up to launch position, release, flight to space and glide back to Earth for a runway landing at Spaceport America.
Virgin Galactic will launch six people on the Unity 22 flight, although the spacecraft is designed to carry up to eight people (two pilots and six passengers).
Unity 22’s crew includes four mission specialists:
Sirisha Bandla, Vice President of Government Affairs and Research Operations at Virgin Galactic. She will evaluate the human-tended research experience via an experiment from the University of Florida that requires several handheld fixation tubes to be activated at various points in the flight profile.
Colin Bennett, Lead Operations Engineer at Virgin Galactic. He will evaluate cabin equipment, procedures and the experience during the boost phase and weightless environment inside Unity.
Sir Richard Branson, founder of Virgin Galactic. Branson will evaluate the private astronaut experience. He will receive the same training, preparation and flight as Virgin Galactic’s future ticket-buying astronauts and use the flight to find ways to enhance the experience for customers.
Beth Moses, Chief Astronaut Instructor at Virgin Galactic. She will serve as cabin lead and test director in space. Her tasks include overseeing the safe execution of the test flight objectives. Moses has launched on Unity before.
Two veteran Virgin Galactic pilots will be at the helm of Unity during the launch. They have both launched to space on Unity before and are:
Dave Mackay: Mackay is Virgin Galactic’s chief pilot and grew up in the highlands of Scotland. He is a former Royal Air Force pilot and flew for Branson’s airline company Virgin Atlantic before joining Virgin Galactic.
Michael Masucci: Michael “Sooch” Masucci is a retired U.S. Air Force colonel who joined Virgin Galactic in 2013. He racked up over 9,000 flying hours in 70 different types of airplanes and gliders during more than 30 years of civilian and military flight.
Two other pilots will fly the VMS EVE carrier plane that will carry SpaceShipTwo into launch altitude. They are:
Frederick “CJ” Sturckow: A former NASA space shuttle commander who joined Virgin Galactic in 2013 with Masucci. A retired Marine Corps colonel, he was the first NASA astronaut to join the company and flew four space shuttle missions.
Kelly Latimer: Latimer is a test pilot and retired lieutenant colonel in the U.S. Air Force who joined Virgin Galactic’s pilot corps in 2015. She was the first female research test pilot to join what is now NASA’s Armstrong Flight Research Center.
The primary objective for Unity 22 is to serve as a test flight for future passenger flights by Virgin Galactic. As its number suggests, this will be the 22nd flight of Unity, but only its fourth launch to space.
The four mission specialists will each evaluate different experiences that Virgin Galactic has promised its future customers, many of whom have already reserved trips to space with the company at $250,000 a seat.
Bandla, for example, will test the experience of performing experiments aboard Unity during different phases of the flight, including the weightless period. Branson will take note of the flight as a paying passenger to look for ways to enhance the trip for ticket holders looking for the experience of a lifetime.
Moses is Virgin Galactic’s chief astronaut instructor and will ensure everyone is safe in their tests, while Bennett will examine Unity’s cabin performance to look for potential enhancements.
This mission is a critical flight for Virgin Galactic, which Branson founded in 2004. VSS Unity is the company’s second SpaceShipTwo after the first, VSS Enterprise, broke apart during a 2014 test flight, killing one pilot and seriously injuring another. Virgin Galactic has made numerous safety upgrades to prevent such an accident from happening again.
The mission will begin with takeoff from Spaceport America, where Virgin Galactic has built its “Gateway to Space” terminal to serve its future customers. The crews of Unity and Eve will walk out to their vehicles at about 8 a.m. EDT (6 a.m. local time, 1200 GMT). They’ll be wearing custom Under Armour flight suits made for Virgin Galactic.
After takeoff, the carrier plane VMS Eve will haul the SpaceShipTwo VSS Unity (short for Virgin Space Ship) to an altitude of about 50,000 feet (15,000 meters), where it will release the spacecraft.
Virgin Galactic’s first test passenger Beth Moses looks out the window of the VSS Unity during a test flight with pilots Dave Mackay and Michael “Sooch” Masucci, on Feb. 22, 2019. (Image credit: Virgin Galactic)
After separation, Unity will ignite its hybrid rocket motor, which uses a mixture of solid and liquid propellant, to begin the boost phase. This will carry Unity to its target altitude above 50 miles (80 kilometers), where the pilots and crew can expect up to 4 minutes of weightlessness. They will exit their seats and enjoy sweeping views of the Earth below through the many round windows that dot the space plane’s fuselage.
After that short encounter with weightlessness, the crew will climb back into their seats as Unity prepares to return to Earth. Pilots Mackay and Masucci will have “feathered” the spacecraft’s twin tail booms to provide stability during atmospheric reentry.
The feathered tail will then be locked back into place for the glide back to Earth, which will end with a runway landing at Spaceport America. The entire flight, from takeoff to landing, should last about 90 minutes, Branson has said.
WILL VIRGIN GALACTIC REALLY REACH SPACE WITH UNITY 22?
Virgin Galactic’s VSS Unity spaceliner captured this view of Earth during the vehicle’s first trip to space, on Dec. 13, 2018. (Image credit: Virgin Galactic)
Virgin Galactic will launch Unity to an altitude above 50 miles (80 km), which NASA, the Federal Aviation Administration and the U.S. military classify as space. They will earn astronaut wings for reaching that height.
Another widely recognized boundary of space, the Kármán line, is at an altitude of 62 miles (100 km) above Earth. The SpaceShipTwo VSS Unity won’t reach this milestone, which has led Virgin Galactic’s competitor Blue Origin (which does fly higher than 62 miles) to call out Virgin Galactic for missing that mark.
WHERE DOES VIRGIN GALACTIC LAUNCH SPACESHIPTWO FROM?
Virgin Galactic initially launched SpaceShipTwo test flights from the company’s facilities at Mojave Air and Space Port in California. However, in 2020 the company moved Unity and its carrier craft to its permanent home at Spaceport America, where it plans to fly regular passenger flights beginning in 2022.
Spaceport America is located near Las Cruces, New Mexico and is home to Virgin Galactic’s “Gateway to Space” terminal, a welcome center and waiting room for ticketed passengers preparing for trips to space. It also sports a large hangar designed to fit multiple SpaceShipTwo spaceplanes and the VMS Eve. Virgin Galactic has also built a new vehicle, the SpaceShip III VSS Imagine.
WHEN COULD I LAUNCH TO SPACE WITH VIRGIN GALACTIC?
If you booked a trip with Virgin Galactic early and have one of the first reservations, you may get your chance to fly in space as early as 2022. If not, there’s a long wait ahead. And that’s assuming you can afford the $250,000 ticket price.
Virgin Galactic has said it plans to begin passenger launches in 2022 after a series of final test flights in 2021. The company does have hundreds of reservations for customer flights in backlog from eager would-be astronauts who have been waiting more than 17 years (since Richard Branson first announced Virgin Galactic in 2004) for SpaceShipTwo to finally fly. The company paused taking new reservations after the 2014 accident.
Virgin Galactic is expected to resume taking reservations for “a limited number of tickets for future spaceflights” sometime this year, according to its website.
Alpha Centauri is the third-brightest star in our night sky – a famous southern star – and the nearest star system to our sun. Through a small telescope, the single star we see as Alpha Centauri resolves into a double star. This pair is just 4.37 light-years away from us. In orbit around them is Proxima Centauri, too faint to be visible to the unaided eye. At a distance of 4.25 light-years, Proxima is the closest known star to our solar system.

Science of the Alpha Centauri system. The two stars that make up Alpha Centauri, Rigil Kentaurus and Toliman, are quite similar to our sun. Rigil Kentaurus, also known as Alpha Centauri A, is a yellowish star, slightly more massive than the sun and about 1.5 times brighter. Toliman, or Alpha Centauri B, has an orangish hue; it’s a bit less massive and half as bright as the sun. Studies of their mass and spectroscopic features indicate that both these stars are about 5 to 6 billion years old, slightly older than our sun.
Alpha Centauri A and B are gravitationally bound together, orbiting a common center of mass every 79.9 years at relatively close range, with their separation varying between about 11 and 36 astronomical units (that is, 11 to 36 times the distance between the Earth and our sun).
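The quoted period can be sanity-checked with Kepler's third law: in units of astronomical units, years and solar masses, the combined mass of a binary is M = a³/P². Assuming a semi-major axis of about 23.5 AU for the AB orbit (an assumed published value, not stated in this article), the 79.9-year period implies a combined mass of roughly two suns, consistent with two broadly sun-like stars. A minimal sketch:

```python
# Kepler's third law in solar units: M_total = a^3 / P^2
# (a in AU, P in years, M in solar masses)
a_au = 23.5      # assumed semi-major axis of the Alpha Centauri AB orbit (not from this article)
p_years = 79.9   # orbital period quoted in the article

m_total = a_au**3 / p_years**2
print(f"Combined mass: {m_total:.2f} solar masses")  # about 2 solar masses
```

The same check shows why a much larger separation would be implausible: a semi-major axis near 45 AU with this period would imply over ten solar masses, far too heavy for two stars similar to our sun.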
In comparison, Proxima Centauri is a bit of an outlier. This dim reddish star, weighing in at just 12 percent of the sun’s mass, is currently about 13,000 astronomical units from Alpha Centauri A and B. Recent analysis of ground- and space-based data, published in 2017, has shown that Proxima is gravitationally bound to its bright companions, with a 550,000-year-long orbital period.
Proxima Centauri belongs to a class of low-mass stars with cooler surface temperatures, known as red dwarfs. It’s also what’s known as a flare star, meaning it randomly displays sudden bursts of brightness due to strong magnetic activity.
In the past decade, astronomers have been searching for planets around the Alpha Centauri stars; they are, after all, the closest stars to us so the odds of detecting planets, if any existed, would be higher. So far, two planets have been found orbiting Proxima Centauri, one in 2016 and another in 2019. A paper published in February 2021 reported tantalizing evidence of a Neptune-sized planet around Alpha Centauri A, but so far, it has not been definitively confirmed.
How to see Alpha Centauri. Unluckily for many of us in the Northern Hemisphere, Alpha Centauri is located too far to the south on the sky’s dome. Most North Americans never see it; the cut-off latitude is about 29° north, and anyone north of that is out of luck. In the U.S. that latitudinal line passes near Houston and Orlando, but even from the Florida Keys, the star never rises more than a few degrees above the southern horizon. Things are a little better in Hawaii and Puerto Rico, where it can get 10° or 11° high.
But for observers located far enough south in the Northern Hemisphere, Alpha Centauri may be visible at roughly 1 a.m. (local daylight saving time) in early May. That is when the star is highest above the southern horizon. By early July, it reaches its highest point to the south at nightfall. Even so, from these vantage points, there are no good pointer stars to Alpha Centauri. For those south of 29° N. latitude, when the bright star Arcturus is high overhead, look to the extreme south for a glimpse of Alpha Centauri.
The southern constellation Centaurus. Image via Wikimedia/ International Astronomical Union/ SkyandTelescope.com.
Observers in the tropical and subtropical regions of the Northern Hemisphere can find Alpha Centauri by first identifying the distinctive Southern Cross. A short line drawn through the crossbar (Delta and Beta Crucis) eastward first comes to Hadar (Beta Centauri), then Alpha Centauri. Meanwhile, in Australia and much of the Southern Hemisphere, Alpha Centauri is circumpolar, meaning that it never sets.
In this image taken at the European Southern Observatory’s La Silla Observatory in Chile, the Southern Cross is clearly visible, with the yellowish star, closest to the dome, marking the top of the cross. Drawing a line downward through the crossbar stars takes you to the bluish star, Beta Centauri, and then to the yellowish Alpha Centauri. Image via ESO / Wikimedia Commons.
Alpha Centauri in mythology. Alpha Centauri has played a prominent role in the mythology of cultures across the Southern Hemisphere. For the Ngarrindjeri indigenous people of South Australia, Alpha and Beta Centauri were two sharks pursuing a sting ray represented by stars of the Southern Cross. Some Australian aboriginal cultures also associated stars with family relationships and marriage traditions; for instance, two stars of the Southern Cross were thought to be the parents of Alpha Centauri.
Astronomy and navigation were deeply intertwined in the lives of ancient seafaring Polynesians as they sailed between islands in the vast expanse of the South Pacific. These ancient mariners navigated using the stars, with cues from nature such as bird movements, waves, and wind direction. Alpha Centauri and nearby Beta Centauri, known as Kamailehope and Kamailemua, respectively, were important signposts used for orientation in the open ocean.
For ancient Incas, a llama graced the sky, traced out by stars and dark dust lanes in the Milky Way from Scorpius to the Southern Cross, with Alpha Centauri and Beta Centauri representing its eyes.
A plaque at the Coricancha museum showing Inca constellations. Coricancha, located in Cusco, Peru, was perhaps the most important temple of the Inca empire. Image via Pi3.124 / Wikimedia Commons.
Ancient Egyptians revered Alpha Centauri, and may have built temples aligned to its rising point. In southern China, it was part of a star group known as the South Gate.
Alpha Centauri is the brightest star in the constellation Centaurus, named after the mythical half human, half horse creature. It was thought to represent an uncharacteristically wise centaur that figured in the mythology of Heracles and Jason. The centaur was accidentally wounded by Heracles, and placed into the sky after death by Zeus. Alpha Centauri marked the right front hoof of the centaur, although little is known of its mythological significance, if any.
A depiction of the Centaur by Polish astronomer Johannes Hevelius in his atlas of constellations, Firmamentum Sobiescianum, sive Uranographia. Image via Wikimedia Commons.
Alpha Centauri’s position is RA: 14h 39m 36s, Dec: -60° 50′ 02″
Bottom line: Alpha Centauri is actually a binary star, a pair of stars quite similar to our sun. A third star that’s gravitationally bound to them is Proxima Centauri, the closest star to our sun.
NASA’s Kepler mission has confirmed the first near-Earth-size planet in the “habitable zone” around a sun-like star. This discovery and the introduction of 11 other new small habitable zone candidate planets mark another milestone in the journey to finding another “Earth.”
The newly discovered Kepler-452b is the smallest planet to date discovered orbiting in the habitable zone — the area around a star where liquid water could pool on the surface of an orbiting planet — of a G2-type star, like our sun. The confirmation of Kepler-452b brings the total number of confirmed planets to 1,030.
“On the 20th anniversary year of the discovery that proved other suns host planets, the Kepler exoplanet explorer has discovered a planet and star which most closely resemble the Earth and our Sun,” said John Grunsfeld, associate administrator of NASA’s Science Mission Directorate at the agency’s headquarters in Washington. “This exciting result brings us one step closer to finding an Earth 2.0.”
Kepler-452b is 60 percent larger in diameter than Earth and is considered a super-Earth-size planet. While its mass and composition are not yet determined, previous research suggests that planets the size of Kepler-452b have a good chance of being rocky.
Highlighted are 12 new planet candidates from the seventh Kepler planet candidate catalog that are less than twice the size of Earth and orbit in the stars’ habitable zone. Credits: NASA Ames/W. Stenzel
There are 4,696 planet candidates now known with the release of the seventh Kepler planet candidate catalog – an increase of 521 since the release of the previous catalog in January 2015. Credits: NASA/W. Stenzel
While Kepler-452b is larger than Earth, its 385-day orbit is only 5 percent longer. The planet is 5 percent farther from its parent star, Kepler-452, than Earth is from the sun. Kepler-452 is 6 billion years old, 1.5 billion years older than our sun; it has the same temperature, is 20 percent brighter, and has a diameter 10 percent larger.
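These orbital figures are self-consistent under Kepler's third law (P² = a³/M in years, AU and solar masses). Assuming a stellar mass of about 1.04 solar masses for Kepler-452 (an assumed value, not given in this article), an orbit 5 percent wider than Earth's reproduces the reported 385-day period. A quick sketch:

```python
import math

# Kepler's third law in solar units: P^2 = a^3 / M
a_au = 1.05     # 5 percent farther than Earth's 1 AU (from the article)
m_star = 1.04   # assumed mass of Kepler-452 in solar masses (not from this article)

p_years = math.sqrt(a_au**3 / m_star)
p_days = p_years * 365.25
print(f"Predicted period: {p_days:.0f} days")  # close to the reported 385 days
```

The slightly-more-massive star nearly cancels the effect of the wider orbit, which is why the year is only 5 percent longer despite the 5 percent larger orbit.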
“We can think of Kepler-452b as an older, bigger cousin to Earth, providing an opportunity to understand and reflect upon Earth’s evolving environment,” said Jon Jenkins, Kepler data analysis lead at NASA’s Ames Research Center in Moffett Field, California, who led the team that discovered Kepler-452b. “It’s awe-inspiring to consider that this planet has spent 6 billion years in the habitable zone of its star; longer than Earth. That’s substantial opportunity for life to arise, should all the necessary ingredients and conditions for life exist on this planet.”
To help confirm the finding and better determine the properties of the Kepler-452 system, the team conducted ground-based observations at the University of Texas at Austin’s McDonald Observatory, the Fred Lawrence Whipple Observatory on Mt. Hopkins, Arizona, and the W. M. Keck Observatory atop Mauna Kea in Hawaii. These measurements were key for the researchers to confirm the planetary nature of Kepler-452b, to refine the size and brightness of its host star and to better pin down the size of the planet and its orbit.
The Kepler-452 system is located 1,400 light-years away in the constellation Cygnus. The research paper reporting this finding has been accepted for publication in The Astronomical Journal.
In addition to confirming Kepler-452b, the Kepler team has increased the number of new exoplanet candidates by 521 from their analysis of observations conducted from May 2009 to May 2013, raising the number of planet candidates detected by the Kepler mission to 4,696. Candidates require follow-up observations and analysis to verify they are actual planets.
Twelve of the new planet candidates have diameters between one to two times that of Earth, and orbit in their star’s habitable zone. Of these, nine orbit stars that are similar to our sun in size and temperature.
“We’ve been able to fully automate our process of identifying planet candidates, which means we can finally assess every transit signal in the entire Kepler dataset quickly and uniformly,” said Jeff Coughlin, Kepler scientist at the SETI Institute in Mountain View, California, who led the analysis of a new candidate catalog. “This gives astronomers a statistically sound population of planet candidates to accurately determine the number of small, possibly rocky planets like Earth in our Milky Way galaxy.”
These findings, presented in the seventh Kepler Candidate Catalog, will be submitted for publication in the Astrophysical Journal. These findings are derived from data publicly available on the NASA Exoplanet Archive.
Scientists now are producing the last catalog based on the original Kepler mission’s four-year data set. The final analysis will be conducted using sophisticated software that is increasingly sensitive to the tiny telltale signatures of Earth-size planets.
Ames manages the Kepler and K2 missions for NASA’s Science Mission Directorate. NASA’s Jet Propulsion Laboratory in Pasadena, California, managed Kepler mission development. Ball Aerospace & Technologies Corporation operates the flight system with support from the Laboratory for Atmospheric and Space Physics at the University of Colorado in Boulder.
For more information about the Kepler mission, visit:
When you drink too much water, you may experience water poisoning, intoxication, or a disruption of brain function. This happens when there’s too much water in the cells (including brain cells), causing them to swell. When the cells in the brain swell, they create pressure inside the brain. You may start experiencing things like confusion, drowsiness, and headaches. If this pressure increases, it could cause conditions like hypertension (high blood pressure) and bradycardia (low heart rate).
Sodium is the electrolyte most affected by overhydration, leading to a condition called hyponatremia. Sodium is a crucial element that helps keep the balance of fluids in and out of cells. When its levels drop due to a high amount of water in the body, fluids get inside the cells. Then the cells swell, putting you at risk of having seizures, going into a coma, or even dying.
Signs That You’re Drinking Too Much Water
The color of your urine. One of the best ways to determine whether you’re drinking enough water is to monitor the color of your urine. It usually ranges from pale yellow to tea-colored, due to the combination of the pigment urochrome and the water level in your body. If your urine is often clear, that’s a sure sign you’re drinking too much water in a short span.
Too many bathroom trips. Another sign is if you’re relieving yourself more than usual. On average, you should urinate six to eight times a day. Going up to 10 times is normal for water-drinking high achievers or people who regularly drink caffeine or alcohol.
Drinking water even when you’re not thirsty. A third way to avoid drinking too much water is to be aware of when your body needs it. The body can fight against dehydration by letting you know when you need to drink some water. Thirst is the body’s response to dehydration and should be your guiding cue.
In terms of marketing, publicity is the public visibility or awareness of a product or service for any organisation, business or company. It may also refer to the movement of information from its source to the general public, often (but not necessarily) via the media. Here are some of the ways influencers impact the publicity of a brand:
1) Building Awareness about the Brand:
Social media influencers on various platforms establish credibility in a particular industry (such as fashion). There is often a huge gap between brands and their end users; influencers bridge this gap by taking consumers through the decision-making phase, and thus help create a positive brand reputation.
2) Informing people about the product:
One of the main reasons consumers trust influencers is that they relate to them on a personal level. Influencers know the major everyday needs of an individual, and hence they keep their followers up to date on the latest trends in existing products and services, or share details about the latest products.
3) Sponsor products in their profiles:
Sometimes social media influencers receive fixed payments to feature a brand’s products, or the brand itself, on their profile for a specific period of time, or they may feature the products in their social media posts. This directly builds general awareness of the brand.
4) Giving Prizes:
A social media contest is one of the newest ways to increase the popularity of brands or their products. Business firms collaborate with social media influencers, and it is one of the most cost-effective techniques to build awareness and general interest in a brand. This is a typical strategy, and it may attach certain conditions to the prizes, such as following the official page of the brand, tagging friends, liking posts and the like.
These things directly help in increasing the online engagement and further increasing general interest in the brand.
Source: Januz Wielki.
The above graph is from a survey conducted in August 2020. It shows that the majority of respondents feel “transfer of information” about a product is the main thing they look for when it comes to social media influencers. Increasing brand awareness and brand loyalty are the next two elements respondents look for.
This acts as evidence that social media influencers are very impactful when it comes to the promotion and publicity of a product or service. In this modern era, where social media is the new “illusion” for people, social media influencers help build and increase brand awareness, and thereby assist in promotion and publicity. Good marketing managers would tap this opportunity of social media influencer marketing to enhance the audience reach of their business.
Malnutrition is a condition that results from eating a diet that does not supply a healthy amount of one or more nutrients. This includes diets that have too few nutrients, or so many that the diet causes health problems. The nutrients involved can include calories, protein, carbohydrates, fat, vitamins or minerals. A lack of nutrients is called undernutrition or undernourishment, while a surplus of nutrients causes overnutrition. Malnutrition is often used to refer to undernutrition, when a person is not getting enough calories, protein, or micronutrients. If undernutrition occurs during pregnancy, or before two years of age, it may result in permanent problems with physical and mental development. Extreme undernourishment, known as starvation or chronic hunger, may have symptoms that include short stature, a thin body, poor energy levels, and swollen legs and abdomen. People who are malnourished often catch infections and are frequently cold. The symptoms of micronutrient deficiencies depend on which micronutrient is lacking. Undernourishment is most often due to a lack of good-quality food being available to eat, which is often linked to high food prices and poverty. A lack of breastfeeding may also contribute to undernourishment.
Infectious diseases such as gastroenteritis, pneumonia, malaria and measles, which increase nutrient requirements, can also cause malnutrition. Common micronutrient deficiencies include a lack of iron, iodine and vitamin A. Deficiencies may become more common during pregnancy, because of the body’s increased need for nutrients. In some developing countries, overnutrition in the form of obesity is appearing within the same communities as undernutrition, because the food that is most readily available is often not healthy. Other causes of malnutrition include anorexia nervosa and bariatric surgery. Malnutrition increases the risk of infection and infectious disease, and moderate malnutrition weakens every part of the immune system. For example, it is a significant risk factor in the onset of active tuberculosis. Protein and energy malnutrition and deficiencies of specific micronutrients (including iron, zinc and vitamins) increase susceptibility to infection. Malnutrition affects HIV transmission by increasing the risk of transmission from mother to child and also by increasing replication of the virus. In communities or areas that lack access to safe drinking water, these additional health risks present a critical problem.
TYPES OF MALNUTRITION
Malnutrition can lead to serious health issues, including stunted growth, eye problems, diabetes and heart disease. It is basically of two types:
• Under-nutrition: This kind of malnutrition results from not getting enough protein, calories or micronutrients. It is caused primarily by an inadequate intake of dietary energy, regardless of whether any other specific nutrient is a limiting factor. Undernutrition typically results from not getting enough nutrients in your diet. This can cause weight loss, loss of fat and muscle mass, hollow cheeks and sunken eyes, a swollen stomach, dry hair and skin, delayed wound healing, fatigue, difficulty concentrating, irritability, depression and anxiety. People with undernutrition may have one or several of these symptoms. Some types of undernutrition have signature effects.
Kwashiorkor, a severe protein deficiency, causes fluid retention and a protruding abdomen. In contrast, the condition marasmus, which results from severe calorie deficiency, leads to wasting and significant loss of fat and muscle.
Undernutrition can also result in micronutrient deficiencies. Some of the most common deficiencies and their symptoms include:
Vitamin A: Dry eyes, night blindness, increased risk of infection.
Zinc: Loss of appetite, stunted growth, delayed healing of wounds, hair loss, diarrhoea.
Iodine: Enlarged thyroid glands (goitre), decreased production of thyroid hormone, growth and development problems. Because undernutrition leads to serious physical and health problems, it can increase the risk of death. It is estimated that stunting, wasting and zinc and vitamin A deficiencies contributed to up to 45% of all child deaths in 2011.
• Over-nutrition: Overconsumption of certain nutrients, such as protein, calories or fat, can also lead to malnutrition. This usually results in overweight or obesity. The main signs of overnutrition are overweight and obesity, but it can also lead to nutrient deficiencies. Research shows that people who are overweight or obese are more likely to have inadequate intakes and low blood levels of certain vitamins and minerals compared with people of a normal weight. One study in 285 adolescents found that blood levels of vitamins A and E in obese individuals were 2–10% lower than those of normal-weight participants. This is likely because overweight and obesity can result from an overconsumption of fast and processed foods that are high in calories and fat yet low in other nutrients.
A study in more than 17,000 adults and children found that those who ate fast food had significantly lower intakes of vitamins A and C, and higher calorie, fat and sodium consumption, than those who avoided this kind of food.
COMMON CAUSES OF MALNUTRITION
Common causes of malnutrition include:
Food insecurity or a lack of access to sufficient and affordable food: Studies link food insecurity in both developing and developed nations to malnutrition.
Digestive problems and issues with nutrient absorption: Conditions that cause malabsorption, such as Crohn's disease, celiac disease and bacterial overgrowth in the intestines, can cause malnutrition.
Excessive alcohol consumption: Heavy alcohol use can lead to inadequate intake of protein, calories and micronutrients.
Mental health disorders: Depression and other mental health conditions can increase malnutrition risk. One study found that the prevalence of malnutrition was 4% higher in people with depression compared to healthy individuals.
Inability to obtain and prepare foods: Studies have identified being frail, having poor mobility and lacking muscle strength as risk factors for malnutrition. These issues impair food preparation skills.
Developers spend countless hours solving business problems with code. Then comes the never-ending part where it is the ops team's turn to spend countless hours figuring out how to get that code up and running on whatever computers are available, and making sure those computers operate smoothly. Serverless computing represents an evolution of cloud programming models, abstractions, and platforms, and is a milestone in the maturation and wide adoption of cloud technologies.
What is serverless computing?
Serverless computing is a cloud computing execution model in which the cloud provider allocates machine resources on demand, managing the servers on behalf of its customers. It does not hold resources in volatile memory; computing is instead done in short bursts, with the results persisted to storage. When an app is not in use, no computing resources are allocated to it. Common languages supported by serverless runtimes include Java, Python, PHP and Node.js. Amazon's AWS Lambda was the first serverless platform, and it defined several key dimensions including cost, programming model, deployment, resource limits, security, and monitoring. Its supported languages include Node.js, Java, Python, and C#. Initial versions had limited composability, but this has been addressed recently.
Current trend
1. Google Cloud Functions: It provides basic FaaS functionality to run serverless functions written in Node.js. The functionality is currently limited but expected to grow in future versions.
2. Microsoft Azure Functions: It provides HTTP webhooks and integration with Azure services to run user-provided functions. The platform supports C#, Node.js, Python, PHP, bash, or any executable. The runtime code is open source and available on GitHub under an MIT License. To ease debugging, the Azure Functions CLI provides a local development experience for creating, developing, testing, running, and debugging Azure Functions.
3. IBM OpenWhisk: It provides event-based serverless programming with the ability to chain serverless functions to create composite functions. It supports Node.js, Java, Swift, Python, as well as arbitrary binaries embedded in a Docker container. OpenWhisk is available on GitHub under an Apache open source license. Besides these, there are several serverless projects, ranging from open source efforts to vendors that find serverless a natural fit for their business. OpenLambda is an open source serverless computing platform; its source code is available on GitHub under an Apache License. Its paper outlines a number of performance challenges, such as supporting faster function startup time for heterogeneous language runtimes and across a load-balanced pool of servers, deployment of large amounts of code, and supporting stateful interactions (such as HTTP sessions).
4.AWS Lambda: It is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code as a ZIP file or container image, and Lambda automatically and precisely allocates compute execution power and runs your code based on the incoming request or event, for any scale of traffic. You can set up your code to automatically trigger from over 200 AWS services and SaaS applications or call it directly from any web or mobile app. You can write Lambda functions in your favorite language (Node.js, Python, Go, Java, and more) and use both serverless and container tools, such as AWS SAM or Docker CLI, to build, test, and deploy your functions.
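As a minimal sketch of what "just upload your code" looks like, a Python Lambda function is a plain handler that receives an event and returns a response. The handler signature below follows Lambda's Python convention; the shape of the event payload and the greeting logic are illustrative assumptions, not part of any real deployment:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: greet the name passed in the event.

    `event` is the JSON payload delivered by the trigger; `context` carries
    runtime metadata (unused here). The platform spins up and scales
    instances of this function automatically; the developer provisions
    no servers.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, the handler can be exercised like any ordinary function, e.g. `lambda_handler({"name": "Ada"}, None)`, which is also roughly what the Lambda runtime does on each incoming event.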
Advantages
1. No infrastructure to maintain: Serverless computing services, which are small snippets of code meant to execute a single function, run on pre-existing servers that execute functions for countless other customers as well. Since you're literally using someone else's computer to run your serverless functions, there's no infrastructure for you to maintain.
2. No costs when functions aren't running: As Hacker Noon points out, the costs associated with serverless computing are minimal compared to other cloud services. Access authorization, presence detection, security, image processing, and other costs associated with operating a server, whether physical or virtual, are eliminated under a serverless model. In short, serverless functions can be dirt cheap, and if they aren't being spun up for use, you aren't paying anything.
3. Infinitely scalable: Any serverless platform worth investing in is designed to scale automatically based on need. That's another advantage of serverless computing: there's never a need to provision a new cloud server or purchase additional computing power for an existing instance. All of that is handled by the serverless platform, leaving you with no complication beyond a slightly larger bill for additional computing time.
4. Reduced latency: Cloudflare points out that using serverless functions can greatly reduce the latency experienced by end users. Serverless functions don't operate from a single origin server, so there's no single location that an end user's traffic has to be directed to.
5. Reduced software complexity: Serverless functions don't need to take infrastructure concerns into account; the code just has to be supported by the cloud platform being used. On top of being easier to build, serverless functions require a lot less coding knowledge, which opens up development to those at lower skill levels. Because cloud-native systems inherently scale down as well as up, they are described as elastic rather than merely scalable. Small teams of developers are able to run code themselves without depending on teams of infrastructure and support engineers; more developers are becoming DevOps-skilled, and the distinction between being a software developer and a hardware engineer is blurring.
Disadvantages
1. Security issues: The server that runs serverless functions runs them for myriad customers, which opens up a lot of security concerns.
2. Vendor lock-in: Building serverless functions on one platform can mean that migrating to another is difficult. Code might need to be rewritten, and APIs that exist on one platform may not exist on another. If you're going to invest in a serverless platform, be sure the vendor you're considering has everything you need, because becoming unhappy with your serverless computing provider a few months or years into your service can be a major problem.
3. Debugging is more difficult: Every time a serverless instance spins up, it creates a new version of itself, which makes it difficult to collect the data necessary to debug and fix a serverless function. Debugging serverless functions is possible, but it's not a simple task, and it can eat up lots of time and resources.
Conclusion
Serverless computing continues the trend toward higher levels of abstraction in cloud programming models. It is currently exemplified by the Function as a Service (FaaS) model, where developers write small stateless code snippets and allow the platform to manage the complexities of scalably executing the function in a fault-tolerant manner. This seemingly restrictive model nevertheless lends itself well to a number of common distributed application patterns, including compute-intensive event processing pipelines. Most of the large cloud computing vendors have released their own serverless platforms.
Area-51 has been a highly protected, secured and restricted part of the USA for many years. America didn't even accept that there is a place called Area-51 in the country until 2003, and it has never revealed what actually happens inside. The place is heavily guarded from the outside world. On September 20, 2019, a weird thing happened at Area-51. People gathered in large numbers and protested against the government, saying that aliens should be protected from humans. They planned to get inside Area-51 and see what was there. They claimed there were aliens inside, that humans were torturing them, and that they were going to save the aliens.
It began with a joke Facebook event called 'Storm Area-51', arranged to get into Area-51 and see what actually happens there. Its participants believed that there were aliens inside Area-51.
What exactly is Area-51?
After World War 1, the government of the USA planned to test its weapons and bombs in a very big open place. They chose a desert in Nevada, in the USA. They divided the desert into 30 square areas. In each area, they tested a specific weapon that they were preparing to use in World War 2. The advantage for the USA was that this place lies in the middle of a desert, so no other nation could find it, and even an American citizen would have to travel many kilometres and cross heavy security to get in. So it was literally impossible for anyone to know what happened inside that place.
Even after World War 2, there was a cold war between the USA and Russia. The USA continued to test its weapons in that place and maintained its secretive nature. Among the areas from Area-1 to Area-30 was Area-15. Near Area-15, there was originally a big lake. After the lake went dry, the government decided to build an underground facility called Area-51 so that the inside of the area wouldn't be visible from outside. The interesting thing is that even when seen from above, nothing is visible except a flight landing strip. There is no road or rail access; the only transport in and out is by flight.
People in the USA were mad with curiosity to know what was inside Area-51. There was a hill called 'Challenger cliff' a few kilometres from Area-51, and people tried to see Area-51 from its top. But no one could make out what was inside; they could see only the outside of it. Knowing this, the government made that hill a restricted area too. For a range of nearly 30 kilometres around Area-51, no one could enter; the security was that tight. Seeing that the government uses flights to go in and out of the area and keeps it that secure, people grew only more curious to know what was inside. This continued for many years, and the government keeps quiet about it to this day.
We will continue this reading journey in "Secrets of Area-51 – Part 2", our next blog.
The process of evolution of different species in a given geographical area starting from a point and literally radiating to other areas of geography (habitats) is called Adaptive Radiation.
Evolution of the Finches
During his journey, Darwin went to the Galapagos Islands. There he observed an amazing diversity of creatures. Small black birds, later called Darwin's Finches, amazed him. He realised that there were many varieties of finches on the same island.
From the original seed-eating stock, many other forms with altered beaks arose on the islands themselves, enabling them to become insectivorous and vegetarian finches. This process is called adaptive radiation.
“The principle of adaptationism has been adopted so widely by Darwinians because it is such a heuristic methodology.”
“Adaptive radiation refers to the adaptation (via genetic mutation) of an organism which enables it to successfully spread, or radiate, into other environments.”
Adaptive radiation of marsupials
Darwin's finches represent one of the best examples of this phenomenon. Another example is the Australian marsupials. A number of marsupials, each different from the other, evolved from an ancestral stock, but all within the Australian island continent.
When more than one adaptive radiation appears to have occurred in an isolated geographical area (representing different habitats), one can call this convergent evolution.
Placental mammals in Australia also exhibit adaptive radiation in evolving into varieties of such placental mammals each of which appears to be ‘similar’ to a corresponding marsupial.
“Speciation is the development of one of multiple new species in the evolutionary process, where the original species produces mutated forms which successfully survive in other environments due to these mutations.”
“Phylogenetics is the study of the evolutionary steps a species has taken during the process of speciation.”
The spreading out and mixing of a substance with another substance due to the motion of its particles is called diffusion. It depends on the motion of particles and is fastest in gases and slowest in solids. The rate of diffusion increases with increasing temperature, because kinetic energy increases, giving faster motion to the particles. Light gases diffuse faster than heavier ones. Examples: the smell of food reaches us even at considerable distances; the smell of perfume spreads all over a room; ink spreads in water on its own when left undisturbed for some time; oxygen and CO2 dissolve in water, allowing aquatic plants and animals to survive; chalk marks disappear from a blackboard left uncleaned for about 15 days.
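The claim that light gases diffuse faster than heavier ones can be quantified with Graham's law (not stated in the text above): the rate of diffusion is inversely proportional to the square root of molar mass. A small sketch:

```python
import math

def relative_diffusion_rate(molar_mass_a, molar_mass_b):
    """Graham's law: rate_A / rate_B = sqrt(M_B / M_A).

    Returns how many times faster gas A diffuses than gas B,
    given their molar masses in g/mol.
    """
    return math.sqrt(molar_mass_b / molar_mass_a)

# Hydrogen (M = 2 g/mol) versus oxygen (M = 32 g/mol):
print(relative_diffusion_rate(2, 32))  # 4.0 (hydrogen diffuses 4x as fast)
```

This matches the everyday observations above: lighter perfume vapours spread through a room much faster than heavier ones would.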
The common unit of measuring temperature is the degree Celsius, and the SI unit is the kelvin. 0 degrees Celsius = 273 K, i.e. Kelvin scale temperature = Celsius scale temperature + 273. The melting point of ice is 0 degrees (273 K) and the boiling point of water is 100 degrees (373 K).
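The conversion rule above can be written as a pair of small helpers; the function names are illustrative, and the text's rounded offset of 273 (rather than the exact 273.15) is kept:

```python
def celsius_to_kelvin(t_c):
    """Kelvin scale temperature = Celsius scale temperature + 273 (rounded, as in the text)."""
    return t_c + 273

def kelvin_to_celsius(t_k):
    """Inverse of the rule above."""
    return t_k - 273

# Melting point of ice and boiling point of water:
print(celsius_to_kelvin(0))    # 273
print(celsius_to_kelvin(100))  # 373
```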
Change of state from one form to another can be brought about by: 1. changing the temperature, 2. changing the pressure. Effect of change in temperature: The process of changing a solid to a liquid by heating is called melting (fusion). The temperature at which this happens at atmospheric pressure is called the melting point of that substance. It occurs because the forces of attraction weaken as the kinetic energy of the particles increases.
The process in which a liquid changes into gas rapidly on heating is called boiling. The temperature at which this takes place at atmospheric pressure is called its boiling point. The process of changing a gas to a liquid by cooling is called condensation; it happens as the gas loses kinetic energy and its particles come closer together. When a liquid changes to a solid on cooling, it is called freezing.
LATENT HEAT OF FUSION: The latent heat of fusion of a solid is the quantity of heat in joules required to convert 1 kg of the solid to liquid without any change in temperature. For ice it is 3.34*10^5 joules per kg. The heat energy is used up in changing the state by overcoming the forces of attraction between the particles, so the temperature remains the same even as energy is supplied; further heating then increases the kinetic energy, raising the temperature. Ice at 0 degrees is more effective at cooling than water at the same temperature, because each kg of melting ice takes its latent heat from the substance being cooled, whereas water has no such latent heat to absorb. Just as a melting solid absorbs heat, a liquid freezing into a solid gives out an equal amount of heat.
Latent heat of vaporization: It is the quantity of heat in joules required to convert 1 kg of a liquid to vapour/gas without a change in temperature. The temperature does not rise because the energy goes into overcoming the forces of attraction. When water changes to steam it absorbs latent heat, and when steam condenses to form water an equal amount of latent heat is given out. This is why burns caused by steam are much more severe than those from boiling water: steam contains more heat than water at the same temperature.
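As a worked sketch of these definitions, the heat absorbed or released in a phase change at constant temperature is simply Q = mass * latent heat. The fusion value is taken from the text; the vaporisation value of about 22.6*10^5 J/kg for water is a commonly quoted figure assumed here, not stated above:

```python
L_FUSION = 3.34e5        # J/kg, latent heat of fusion of ice (value from the text)
L_VAPORISATION = 22.6e5  # J/kg, approximate latent heat of vaporisation of water (assumed)

def heat_for_phase_change(mass_kg, latent_heat):
    """Heat absorbed or released in a phase change at constant temperature: Q = m * L."""
    return mass_kg * latent_heat

# Melting 2 kg of ice at 0 degrees C:
print(heat_for_phase_change(2, L_FUSION))        # 668000.0 J
# Condensing 2 kg of steam at 100 degrees C releases far more heat:
print(heat_for_phase_change(2, L_VAPORISATION))  # 4520000.0 J
```

The factor of nearly seven between the two results is exactly why steam burns are so much more severe than boiling-water burns.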
Sublimation: the change of a solid directly into vapour on heating, and of a gas directly into a solid on cooling, without passing through the liquid state. Examples: ammonium chloride, iodine, camphor, naphthalene, etc. Solid CO2 (dry ice) sublimes directly into gaseous CO2.
Effect of change of pressure: Gases can be liquefied by applying pressure and lowering the temperature. Dry ice is an extremely cold substance used as a deep freeze to keep food and ice cream cold. Solid CO2 changes directly to gas when the pressure is decreased to ordinary atmospheric pressure, so it is always stored under high pressure.
Genetic engineering, also known as genetic modification, is the direct manipulation of DNA to modify an organism's traits, mainly its observable physical properties, in a specific way. Scientists use it to improve or change the features of an individual organism, and it can be applied to anything from a virus to a sheep. For example, genetic engineering can be used to create plants with better nutritional value or plants that can withstand pesticide treatment. It has also been used in animals: to create sheep that generate in their milk a therapeutic protein used to treat cystic fibrosis, and worms that glow in the dark to help scientists learn more about illnesses like Alzheimer's.
Firstly, if we look at its history, genetic engineering was conceived to aid in the prevention of disease transmission. With its advent, scientists can now alter the way genomes are built to eliminate illnesses caused by genetic mutation. Today, genetic engineering is used to treat diseases including cystic fibrosis, diabetes, and a variety of others.
Genetic engineering also helps in detecting problems even before a child is born, which in turn helps in curing illnesses and diseases in unborn children. Humans aren't the only ones that benefit from genetic modification. We can use genetic engineering to create foods that can endure extreme temperatures, whether very hot or very cold, while also providing all of the nutrients that people and animals require to thrive. Animals and plants can have their development rates genetically altered to mature more quickly. Animals can even be genetically changed to improve the productivity of dairy, meat or wool.
However, with advantages come disadvantages as well. Allowing scientists to tear down boundaries that should perhaps be left alone has a lot of drawbacks.
Many religions, after all, think that genetic engineering is equivalent to playing God and forbid it from being used on their children. Aside from religious issues, there are a variety of ethical concerns. Longer life expectancy is already generating societal difficulties throughout the world, so intentionally extending everyone's life on Earth might lead to many more problems in the future, ones that we can't possibly anticipate. Genetic engineering can also lead to genetic defects that scientists cannot foresee, because the human body is a complex structure.
Furthermore, Genetic engineering aids in the resolution of a problem by introducing genes to the organism that will assist it in combating the issue. This can have unfavourable consequences. A plant, for example, may be engineered to require less water, but this would make it intolerant of direct sunshine. Also, nature being a complicated web of interconnections, many side effects can be caused as a consequence of using genetically modified genes.
Therefore, in a world where genetic engineering is advancing at a breakneck pace, the dangers of going too far with it are a constant source of concern, because no one can really anticipate what consequences it will create or where it will lead us. Changing creatures' DNA has definitely raised a few eyebrows. It could work wonderfully, but who knows whether interfering with nature is truly safe. As a result, genetic engineering appears to be a mixed blessing, as we stand to gain as well as lose by furthering this field of study.
With COVID-19 cases increasing rapidly, Kerala has been put on high alert after a Zika virus outbreak
Grappling with a heavy caseload of Covid-19, Kerala has been put on high alert after the Zika virus outbreak. The Union government has rushed a team of experts to the state. At least 14 cases have been confirmed, said the state health ministry. A 24-year-old pregnant woman from Parasala in Thiruvananthapuram district was the first to test positive for Zika on Thursday; apart from her, 13 others were later added to the list. Of the 19 samples sent to the National Institute of Virology (NIV) in Pune, 14 were found to be positive. Health minister Veena George later called an emergency meeting, and an action plan was formulated. “We have started a vigorous vector control programme and the whole state has been alerted. We are monitoring the situation closely and more testing labs will be opened,” said the minister, adding that there is no need for panic. The woman has no travel history, and it is suspected that she was infected locally. Health officials said samples were sent to the NIV last week; the mother gave birth normally on July 7, and both mother and child are stable. At least 60 other samples have been sent to Pune, and health authorities are busy making arrangements to test samples at the NIV regional centre in Alappuzha. A surveillance team, vector control experts and entomology units visited the area where the woman lived and took precautions to prevent further spread. The health authorities hoped that further spread could be contained with utmost vigilance, as all the patients lived not far from the house of the initial patient. Some of the affected are health workers. “They were all working at a private hospital. We will check their travel history and take immediate action,” the minister said. In Delhi, health joint secretary Lav Agarwal said a six-member expert team has been rushed to help the state. “The six-member team includes vector-borne disease experts and doctors from the All India Institute of Medical Sciences,” he said.
The virus is mostly spread by mosquitoes, though it can also be sexually transmitted, said experts. However, few people die from Zika, and only one in five of those infected develops symptoms. The symptoms of the disease, first identified in monkeys in Uganda in 1947 with the first human case reported in Nigeria in 1954, include joint pain, fever and headache. In May 2015, it was reported in Brazil and spread rapidly. It can lead to microcephaly (a shrunken brain) in newborns and a rare autoimmune disease called Guillain-Barre syndrome, experts said, adding that it was first reported in India in Gujarat in 2017. Meanwhile, the state's Covid-19 caseload remains quite high. On Friday, Kerala reported 13,563 new cases of Covid-19 with a high test positivity rate of 10.4 per cent. For almost a month the state has been reporting the highest number of cases in the country. It also reported 130 deaths, taking the toll to 14,380.
Centennial Light is the longest-running electric light bulb on record. It has been burning continuously since 1901 and has rarely been switched off. It is located in Fire Station 6 in Livermore, California. The ordinary, dim bulb looks like any other, and a camera live-streams it onto the internet.
Link for the official website and live webcam of the light bulb.
It was manufactured in the late 1890s by the Shelby Electric Company, of Ohio, using a design by the French-American inventor Adolphe Chaillet. It has operated for over 100 years with very few interruptions. In 2011, it passed a milestone: One million hours of near-continuous operation. In 2015 it was recognized by Guinness World Records as the world’s longest-burning bulb.
The 60-watt bulb uses a carbon filament. One of the reasons for its longevity seems to be an incredibly durable vacuum seal. Some research has been done on bulbs manufactured by the Shelby Electric Company in that era, but no one knows exactly how these long-lived bulbs were made, as the company was experimenting with a variety of designs at the time.
The business model was quite different when the first homes in the U.S. got electricity. Customers would purchase entire electrical systems manufactured by a regional electricity supplier, and servicing was the responsibility of the electric companies: they would take care of installation, and any burned-out bulbs would be replaced for free.
It made more sense for the suppliers to manufacture bulbs that lasted longer and burned out as rarely as possible. But this business model was later replaced, and homeowners became responsible for changing their own light bulbs. Manufacturers soon realised that it would be more profitable to make cheaper bulbs that burned out faster. Since the mid-1900s, goods have been manufactured with a pre-determined expiry date aimed at forcing consumers into repeat purchases, a practice known as planned obsolescence. This phenomenon has only been exacerbated in recent years.
In 1924, the life span of light bulbs was at least 2,500 hours. The Phoebus cartel was formed in 1925 in Geneva. It comprised the major incandescent light bulb manufacturers of the time: Osram, General Electric, Associated Electrical Industries, and Philips. The cartel directed its engineers to cut the life of the bulbs to 1,000 hours, which the engineers did by adjusting voltage and current. The cartel was intended to operate for 30 years, but it started to fall apart in the early 1930s after General Electric's patents expired and the cartel faced competition from non-member manufacturers in other regions. It ceased operations after the outbreak of World War II in 1939.
Planned obsolescence is a troubling practice: not only does it shorten the lifespan of goods, it is also wasteful as a consequence. It is not sustainable for the environment, and its main focus is to maximise profits. It also reminds us that technological innovations are often withheld from consumers in favour of corporate greed.
Sensors can be found all over the world. They may be found in our homes and offices, as well as retail malls and hospitals. They're built into cellphones and play a key role in the Internet of Things (IoT). Sensors are the front end of IoT devices; in the Internet of Things, they are the actual "things." Their primary responsibility is to collect required data from the environment and transmit it to databases or processing systems. Because they are the primary front-end interfaces in a vast network of other devices, they must be individually identifiable, for example by an IP address. Sensors gather real-time data and can be self-contained or controlled by the user. Sensors are vital to the success of many modern enterprises: they can flag possible issues before they turn into major ones, allowing firms to undertake preventive maintenance and avoid costly downtime.
Examples of sensors include gas sensors, water quality sensors, moisture sensors, and many others.
Processors
Processors are the brain of the IoT system, just as they are in computers and other electronic systems. Their primary function is to turn the raw data acquired by sensors into useful information and knowledge; in short, their role is to add intelligence to the data. Processors can be readily controlled by applications, and one of their most essential functions is data security: they are in charge of data encryption and decryption.
Processors built within microcontrollers, embedded hardware devices, and other devices may process data.
Gateways
A gateway for the Internet of Things (IoT) is a physical hardware or software program that connects the cloud to controllers, sensors, and intelligent devices. An IoT gateway, which can be either a specialized hardware appliance or a software application, is responsible for transferring data between IoT devices and the cloud. An intelligent gateway or control tier is another name for an IoT gateway. The primary function of gateways is to route processed data to appropriate databases or network storage for suitable use. In other terms, the gateway facilitates data transmission. IoT systems require communication and network access to function.
LAN, WAN, PAN, and other gateways are examples.
Applications
Applications are the other end of an IoT system. IoT technologies have a wide range of applications because they can be adapted to almost any technology that is capable of providing useful information about its own operation, about the execution of an activity, or about environmental conditions that we need to monitor and control at a distance. Many organizations across many industries now use this technology to simplify, enhance, automate, and control various operations. Applications make good use of all the acquired data and offer users an interface through which they may interact with it. These apps may be cloud-based and are in charge of rendering the acquired data. Applications are controlled by the user and serve as delivery points for particular services.
Smart home apps, security system control apps, industrial control hub apps, and so on are examples of applications.
In the IoT building blocks described above, the raw data collected by the sensors is transmitted to embedded processors. The processors convert the raw data into useful information, which they then send via gateway devices to remote cloud-based apps or database systems. The data is finally delivered to the applications for effective use and for analysis with big data techniques.
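The sensor → processor → gateway → application flow above can be sketched as a toy pipeline. All the names and the simulated temperature reading here are illustrative assumptions, not a real IoT framework:

```python
import json

def sensor_read():
    """Sensor: collect raw data from the environment (simulated reading)."""
    return {"device_id": "sensor-01", "temperature_raw": 2731}

def process(raw):
    """Processor: turn raw data into useful information (scale to degrees C)."""
    return {"device_id": raw["device_id"],
            "temperature_c": raw["temperature_raw"] / 100.0}

def gateway_forward(info, store):
    """Gateway: route processed data to network storage / the cloud,
    serialized as JSON for transmission."""
    store.append(json.dumps(info))

def application_view(store):
    """Application: render the acquired data for the user."""
    return [json.loads(msg) for msg in store]

# One pass through the four building blocks:
cloud_store = []
gateway_forward(process(sensor_read()), cloud_store)
print(application_view(cloud_store))  # [{'device_id': 'sensor-01', 'temperature_c': 27.31}]
```

Each function stands in for one building block, which makes the division of responsibilities (collect, interpret, transport, present) easy to see at a glance.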