General issues in environmental ecology

The environment plays a significant role in supporting life on Earth, but several issues are damaging life and the planet's ecosystems. These issues concern not only the environment but everyone who lives on the planet. Their main sources include pollution, global warming, greenhouse gases, and many others. The everyday activities of humans are constantly degrading the quality of the environment, which ultimately erodes the conditions needed for survival on Earth. There are hundreds of issues causing damage to the environment, but here we will discuss the main causes of environmental problems, because they are the most dangerous to life and to the ecosystem.

Pollution – This is one of the main causes of environmental problems because it poisons the air, water, and soil, and it includes noise pollution. In the past few decades the number of industries has increased rapidly, and many of them discharge their untreated waste into water bodies, onto soil, and into the air. Most of these wastes contain harmful and poisonous materials that spread easily through the movement of water and wind.

Greenhouse gases – These are the gases responsible for the increase in the temperature of the Earth's surface. They are directly related to air pollution, because vehicles and factories emit toxic chemicals that harm life and the environment of the Earth.

Climate change – Due to environmental problems the climate is changing rapidly, and things like smog and acid rain are becoming common. The number of natural calamities is also increasing: almost every year there are floods, famines, droughts, landslides, earthquakes, and many other calamities.

Sustainable development recognises that social, economic, and environmental issues are interconnected, and that decisions must incorporate each of these aspects if they are to be good decisions in the longer term. For sustainable development, accurate environmental forecasts and warnings, together with effective information on pollution, are essential for planning and for ensuring safe and environmentally sound socio-economic activities, and they should be made widely known.


THE EARTH IS WHAT WE
ALL HAVE IN COMMON

History of India & the Indian National Movement

From early times the Indian subcontinent appears to have provided an attractive habitat for human occupation. Toward the south it is effectively sheltered by wide expanses of ocean, which tended to isolate it culturally in ancient times, while to the north it is protected by the massive ranges of the Himalayas, which also sheltered it from the Arctic winds and the air currents of Central Asia. Only in the northwest and northeast is there easier access by land, and it was through those two sectors that most of the early contacts with the outside world took place.

Within the framework of hills and mountains represented by the Indo-Iranian borderlands on the west, the Indo-Myanmar borderlands in the east, and the Himalayas to the north, the subcontinent may in broadest terms be divided into two major divisions: in the north, the basins of the Indus and Ganges (Ganga) rivers (the Indo-Gangetic Plain) and, to the south, the block of Archean rocks that forms the Deccan plateau region. The expansive alluvial plain of the river basins provided the environment and focus for the rise of two great phases of city life: the civilization of the Indus valley, known as the Indus civilization, during the 3rd millennium BCE; and, during the 1st millennium BCE, that of the Ganges. To the south of this zone, and separating it from the peninsula proper, is a belt of hills and forests, running generally from west to east and to this day largely inhabited by tribal people. This belt has played mainly a negative role throughout Indian history in that it remained relatively thinly populated and did not form the focal point of any of the principal regional cultural developments of South Asia. However, it is traversed by various routes linking the more-attractive areas north and south of it. The Narmada (Narbada) River flows through this belt toward the west, mostly along the Vindhya Range, which has long been regarded as the symbolic boundary between northern and southern India.

India's movement for independence occurred in stages, elicited by the inflexibility of the British and, in various instances, by their violent responses to non-violent protests. It was understood that the British were controlling the resources of India and the lives of its people, and that unless this control ended, India could not be for Indians.

On 28 December 1885 the Indian National Congress (INC) was founded on the premises of the Gokuldas Tejpal Sanskrit School at Bombay. It was presided over by W.C. Banerjee and attended by 72 delegates. A.O. Hume played an instrumental role in the foundation of the INC, with the aim of providing a "safety valve" for the British Government.
A.O. Hume served as the first General Secretary of the INC.
The real aim of the Congress was to train Indian youth in political agitation and to organise and create public opinion in the country. For this, it used the method of an annual session, where problems were discussed and resolutions were passed.
The first or early phase of Indian nationalism is also termed the Moderate Phase (1885-1905). Moderate leaders included W.C. Banerjee, Gopal Krishna Gokhale, R.C. Dutt, Ferozeshah Mehta, and George Yule.
The Moderates had full faith in the British Government and adopted the PPP path, i.e. Protest, Prayer, and Petition.
Due to disillusionment with the Moderates' methods of work, extremism began to develop within the Congress after 1892. The Extremist leaders were Lala Lajpat Rai, Bal Gangadhar Tilak, Bipin Chandra Pal, and Aurobindo Ghosh. Instead of the PPP path, they emphasised self-reliance, constructive work, and swadeshi.
With the announcement of the Partition of Bengal (1905) by Lord Curzon, ostensibly for administrative convenience, the Swadeshi and Boycott resolution was passed in 1905.


ONE INDIVIDUAL MAY DIE; BUT THAT IDEA WILL, AFTER HIS DEATH, INCARNATE ITSELF IN A THOUSAND LIVES.

-Netaji Subhash Chandra Bose

Internet Protocol

What is an IP address?

An IP address, short for Internet Protocol address, is an address provided by the Internet Service Provider (ISP) to the user. It works like a postal address or PIN code: it identifies the location to which a message should be sent. An IPv4 address is a unique group of four numbers separated by periods (.), with each number ranging from 0 to 255. Every device has a separate and unique IP address, assigned by its ISP, which identifies which particular device is communicating and accessing the internet.
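
As a concrete illustration, here is a minimal Python sketch, using only the standard-library ipaddress module, that checks whether a string is a valid IPv4 address of the kind described above (the sample addresses are made up for the example):

    import ipaddress

    def check(addr: str) -> None:
        """Report whether a string is a valid IPv4 address."""
        try:
            ipaddress.IPv4Address(addr)   # raises ValueError if malformed
            print(addr, "is a valid IPv4 address")
        except ValueError:
            print(addr, "is NOT valid: four dot-separated numbers, each 0-255")

    check("192.168.1.10")   # valid
    check("256.1.2.3")      # invalid, 256 is out of range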

If you want to access the internet from a device, which may be an Android phone, an iPhone, or a computer, the service provider assigns it a particular, unique address that helps information be sent to and received from the right device without any confusion or mistake. In earlier days we used a postal address to send a message or letter to a person: the address might include a house number, city, town, and postal code, and the sender would write it on the envelope so that the letter would be delivered to the right person. The IP address solves the same problem for the internet. If a person connects a device to the internet provided by a hotel, the hotel's Internet Service Provider will assign an IP address to that device.

Types of IP addresses

There are different types of IP addresses, based on different categories.

Consumer IP addresses

A consumer IP address is the individual IP address of a customer who connects a device to a public or private network. A consumer connects a device to the internet through an Internet Service Provider or over Wi-Fi. These days a consumer typically has many electronic gadgets, all connected to a router that transfers data to and from the Internet Service Provider.

Private IP addresses

A private IP address identifies a device within a private network. Every device connected to the private network, whether a mobile device, a computer, or an Internet of Things device, is assigned its own unique address, typically handed out by the network's router rather than directly by the Internet Service Provider.

Public IP addresses

A public IP address is the main address associated with your network as a whole. As stated above, IP addresses are assigned by the Internet Service Provider; the ISP holds a large pool of addresses and assigns them to its customers. The public IP address is the address that devices outside the network use to identify the network.
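
Python's standard ipaddress module can tell these two categories apart; here is a small sketch (the addresses are illustrative):

    import ipaddress

    for addr in ["192.168.0.5", "10.1.2.3", "8.8.8.8"]:
        ip = ipaddress.ip_address(addr)
        # is_private is True for reserved ranges such as 10.0.0.0/8 and 192.168.0.0/16
        kind = "private" if ip.is_private else "public"
        print(addr, "is a", kind, "IP address")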

Public IP addresses are further classified into two types:

  1. Dynamic
  2. Static

Dynamic IP addresses

A dynamic IP address is one that changes frequently. Internet Service Providers purchase very large pools of IP addresses and assign them to customers automatically. The frequent changes spare the customer some security effort: a constantly changing IP address makes it much harder for hackers to track a device or pool its data.

Static IP addresses

A static IP address is the opposite of a dynamic IP address: once assigned by the Internet Service Provider, it remains fixed. Most individuals and businesses don't choose a static address, because it carries the risk of being tracked more easily; however, businesses that want to host their own website or server often choose a static IP address so that customers can find them more easily.

An IP address can be protected in two ways: by using a proxy, or by using a Virtual Private Network (VPN). A proxy server acts as an intermediary between your device and the wider internet; when you visit a website, the site sees the proxy's IP address rather than yours.

Where to find the IP address in a device?

An IP address is set on every device that is connected to the internet, but the steps to find it differ from device to device. Directions for some devices are given below:

On a Windows personal computer

  1. Go to the Start Menu
  2. Type  ‘Run’ in the Search bar
  3. A Run Tab pops up
  4. Type  ‘cmd’
  5. A Command Prompt window (black screen) pops up
  6. Type ‘ipconfig’
  7. Your IP address is displayed.

In Android Mobile

  1. Go to the Settings
  2. Tap on Network and Internet
  3. Tap on Wi-Fi; it will show the IP address
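
If you prefer to look the address up programmatically, a common trick, sketched below in Python, is to open a UDP socket toward a public address and read back the local endpoint; connecting a UDP socket sends no actual traffic, it only selects a route:

    import socket

    def local_ip() -> str:
        """Return the local IP address this machine uses to reach the internet."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.connect(("8.8.8.8", 80))   # no packet is sent for a UDP connect
            return s.getsockname()[0]
        finally:
            s.close()

    print(local_ip())   # e.g. 192.168.1.23, a private address behind a router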

Web 3.0

Previous versions of the Internet era

Web 1.0: The first version of the web began with the development of the web browser in 1991. It consisted of static websites with content written by a few people and organizations. Other people could only read the content; they could not comment or provide new information, so it was just one-way communication. It worked very well but had one big problem: there was no way to make money off it. For instance, a Web 1.0 startup called Google had heavy traffic but couldn't monetise it.

Web 2.0: The next version of the web, Web 2.0, started around 2004. It allowed consumers to add content through comments, blogs, and the like, and people began creating a lot of content on social media websites as well. So people can both read and write on this version of the web, which allows two-way communication.

What is Web 3.0?

Any innovation starts with a vision, and many people had different visions of how the next version of the web should be. The majority wanted a web that ensured data privacy and free speech. The invention of blockchain technology, which enables peer-to-peer online payment transfers without the involvement of banks, gave hope of creating a decentralized web where user privacy and free speech are guaranteed. The latest technologies, such as blockchain, artificial intelligence, and the Internet of Things, are being used to create Web 3.0.

Web 3.0 is defined as a decentralized web, where content does not lie in the hands of big corporations. Instead, it uses peer-to-peer infrastructure, so the information cannot be censored by corporations or the government. So, it can ensure free speech.

However, the reality may or may not match the vision. It may change somewhat from the vision or take a whole different direction.

The vision of web 3.0

  • Web 3.0 will most likely be a decentralized internet. There are already many decentralized applications (dApps) based on blockchain technology that give users more control over their data and finances.
  • As the data is not controlled by big companies, user privacy will be guaranteed.
  • The accuracy of the information may also be improved by making Artificial intelligence learn to distinguish between good and bad data. AI is already being used to accomplish this goal. Google, for example, uses Artificial Intelligence to delete millions of fake reviews.

  • Web 3.0 allows 3D graphics in apps. Big tech companies have already begun to invest in metaverses – virtual environments. Some of the most popular metaverses include Decentraland, Sandbox, and CryptoVoxels. Metaverses are made possible with the help of Virtual Reality (VR) and Augmented Reality (AR) technologies. We may use our digital avatars to interact, shop, and play games in the virtual world, and we can use cryptocurrencies there for financial transactions.
  • Web 3.0 features are already being adopted by several websites and apps. According to some experts, Web 3.0 will not be able to totally replace Web 2.0 in the near future; instead, both will run simultaneously.

Challenges with Web 3.0

  • Vastness: The internet is huge, containing billions of pages. The SNOMED CT medical terminology ontology alone includes 370,000 class names, and existing technology has not yet been able to eliminate all semantically duplicated terms.
  • Vagueness: User queries are not really specific and can be extremely vague at the best of times. Fuzzy logic is used to deal with vagueness.
  • Uncertainty: The internet deals with scores of uncertain values. For example, a patient might present a set of symptoms that correspond to many distinct diagnoses, each with a different probability. Probabilistic reasoning techniques are generally employed to address uncertainty; see the short example after this list.
  • Inconsistency: Inconsistent data can lead to logical contradictions and unreliable analysis.
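
As a tiny illustration of such probabilistic reasoning (the numbers are invented for this example), Bayes' rule updates the probability of a diagnosis given a symptom:

    P(\text{disease} \mid \text{symptom}) = \frac{P(\text{symptom} \mid \text{disease})\,P(\text{disease})}{P(\text{symptom})}

With P(disease) = 0.01, P(symptom | disease) = 0.9, and P(symptom) = 0.1, the posterior is 0.9 × 0.01 / 0.1 = 0.09, so even a strongly associated symptom leaves considerable uncertainty about the diagnosis.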

Conclusion

Web 3.0 is the next step in the internet's evolution, and its foundations have already been laid. By current standards, Web 3.0 will be a huge advance in network technology, since it aims to be a hyper-intelligent network capable of understanding information much as a human does. Aside from its technological marvels, Web 3.0 also proposes ideas that would drastically alter the existing mode of operation of today's networks. And we, the end-users, will be ushered into a new era of networking, one that will further blur the lines between the physical and the digital space.

Metaverse

What is Metaverse?

In the metaverse, people can interact with each other using virtual and augmented reality technologies, resulting in a shared virtual world.

The metaverse is considered part of Web 3.0. The earliest version of the internet, which consisted of web pages that simply provided information, is termed Web 1.0; the next version consisted of interactive web pages. Web 3.0 will be the result of assimilating virtual reality and augmented reality into Web 2.0.

We can shop, play games, buy things and own places in the metaverse. Several companies are creating gaming metaverses. The game ‘Second Life,’ which was released in 2003, can be considered an early version of the metaverse.

Benefits of Metaverse

  • It will be quite beneficial for hosting meetings. Video conferencing has drawbacks, such as the lack of a personal connection; the metaverse will make us feel as if we are in the same place, interacting through digital avatars.
  • It will also help people with special needs.
  • It can also be used to help people overcome phobias.
  • It is expected that virtual currencies in the metaverse will significantly influence the world economy. Decentralization will reduce the dependence on governments.

Challenges with Metaverse

  • A few companies may control the metaverse, and hence power and influence may stay in the hands of a few people.
  • Governments could increase surveillance and control by collaborating with businesses.
  • Addictions to the internet and smartphones are already common, so it is possible that virtual-world addiction will become the next huge concern. Furthermore, the metaverse consists of entertainment, shopping, games, and many other things that are addictive in nature.
  • Even in this modern era, not everyone has access to the internet, and many people are digitally illiterate. Due to this digital divide, the benefits of the metaverse will not be accessible to many.

Conclusion

Some people believe that the metaverse will be the internet’s future. Many businesses are investing in the development of the metaverse. It is important to ensure that no monopoly exists in the shared virtual environment.

New NASA Earth System Observatory to Help Address, Mitigate Climate Change

May 24, 2021

NASA will design a new set of Earth-focused missions to provide key information to guide efforts related to climate change, disaster mitigation, fighting forest fires, and improving real-time agricultural processes. With the Earth System Observatory, each satellite will be uniquely designed to complement the others, working in tandem to create a 3D, holistic view of Earth, from bedrock to atmosphere.



"I've seen firsthand the impact of hurricanes made more intense and destructive by climate change, like Maria and Irma. The Biden-Harris Administration's response to climate change matches the magnitude of the threat: a whole of government, all hands-on-deck approach to meet this moment," said NASA Administrator Sen. Bill Nelson. "Over the past three decades, much of what we've learned about the Earth's changing climate is built on NASA satellite observations and research. NASA's new Earth System Observatory will expand that work, providing the world with an unprecedented understanding of our Earth's climate system, arming us with next-generation data critical to mitigating climate change, and protecting our communities in the face of natural disasters."

Technological Determinism

Technological determinism is a reductionist theory that aims to provide a causative link between technology and a society's nature. It tries to explain who or what has controlling power in human affairs, and it questions the degree to which human thought or action is influenced by technological factors.




The term ‘technological determinism’ was coined by Thorstein Veblen and this theory revolves around the proposition that technology in any given society defines its nature. Technology is viewed as the driving force of culture in a society and it determines its course of history.

Karl Marx believed that technological progress leads to newer ways of production in a society, and that this ultimately influences the cultural, political, and economic aspects of a society, thereby inevitably changing society itself. He explained this with the example of how a feudal society that used the hand mill slowly changed into an industrial capitalist society with the introduction of the steam mill.

WINNER’S HYPOTHESES

Langdon Winner provided two hypotheses for this theory:

  1. The technology of a given society is a fundamental influence on the various ways in which the society exists.
  2. Changes in technology are the primary and most important source of change in society.

An offshoot of the above hypotheses, which is not as extreme, is the belief that technology influences the various choices we make, and that a changed society can therefore be traced back to changed technologies.

Technological determinism manifests itself at various levels. It begins with the introduction of newer technologies, which bring various changes; at times these changes can also lead to a loss of existing knowledge. For example, the introduction of newer agricultural tools and methods has seen the gradual loss of knowledge of traditional means of farming. Technology is therefore also influencing the level of knowledge in a society.

Examples of Technological determinism

History shows us numerous examples of why technology is considered to determine the society that we live in. The invention of the gun changed how disputes were settled and changed the face of combat. A gun required minimal effort and skill to be used successfully and could be used from a safe distance. Compared with how earlier wars were fought, with swords and archery, this led to a radical change in the weapons used in war.

Today, with the discovery of nuclear energy, future wars may be fought with nuclear arsenals. Each new discovery causes a transition to a different society: the discovery of steam power led to the development of the industrial society, and the introduction of computers has led to the dawn of the information age.

Technological Drift

Winner believed that changes in technology sometimes had unintended or unexpected results and effects as well. He called this phenomenon "technological drift": people drift more and more among a sea of unpredictable and uncertain consequences. According to Winner, technology is not the slave of the human being; rather, humans are slaves to technology, as they are forced to adapt to the technological environment that surrounds them.

Forms of Technological Determinism

An alternative, weaker view of technological determinism says that technology serves a mediating function: although it leads to changes in culture, it is still controlled by human beings. When human control over technology gradually diminishes from the hands of the few who direct it, control passes completely to technology itself. This view of humans having no control is referred to as "autonomous technological determinism."

Technological Determinism and Media

New media are not only an addition to existing media; they are also new technologies and therefore have a deterministic aspect as well. Marshall McLuhan famously stated that "the medium is the message," meaning that the medium used to communicate influences the mind of the receiver. The introduction of newsprint, television, and the internet has shown how technological advances have an impact on the society in which we live.

Criticism of Technological Determinism

A critique of technological determinism is that technology never forces itself on members of society. Man creates technology and chooses to use it; he invents the television and chooses to view it. There is no imposition on the part of the technology; rather, technology requires people to participate or involve themselves at some point in order to use a car or a microwave. The choice of using technology, and of experiencing its effects, therefore lies in the hands of a human being.

Written by: Ananya Kaushal

The incredible journey of Elon Musk’s SpaceX – The engineering masterpiece

The Falcon Heavy launch vehicle was designed to transport people, spacecraft, and various cargo into space. Such a powerful unit wasn't created instantly; it had its predecessors. The history of the Falcon family of vehicles began with the creation of the Falcon 1, a lightweight launch vehicle with a length of 21.3 meters, a diameter of 1.7 meters, and a launch mass of 27.6 tonnes; the rocket could carry 420 kilograms (926 pounds) of payload on board. It became the first privately developed vehicle to bring cargo into low Earth orbit. The Falcon 1 consisted of only two stages; the first comprised a supporting structure with fuel tanks, an engine, and a parachute system. Kerosene was chosen as the fuel, with liquid oxygen as the oxidizing agent.

The Falcon Heavy side boosters landing – SpaceX

The second stage also contained fuel tanks and an engine, though the latter had less thrust than the one in the first stage. The launch cost was $7.9 million. In total, five attempts were made to send the Falcon 1 beyond the atmosphere of our planet, and not all of them were successful. During the rocket's debut launch, a fire started in the first-stage engine; this led to a loss of pressure, which caused the engine to shut down in the 34th second of flight. The second attempt ran into a problem with the fuel system of the second stage: fuel stopped flowing into its engine at the 474th second of flight, and it shut down as well. The third time the Falcon 1 went on a flight, it wasn't alone: as serious cargo, the rocket carried on board the Trailblazer satellite and two NASA micro-satellites. The first-stage phase of the flight went normally, but when the time came to separate the stages, the first stage hit the second as its engine started, so the second stage couldn't continue its flight.

The fourth and fifth launches showed good results, but that wasn't enough. The main problem with the Falcon 1 was low demand due to its limited payload capacity. For this reason, SpaceX designed the Falcon 9; this vehicle can carry 23 tons of cargo on board. It is also a two-stage launch vehicle and uses kerosene and liquid oxygen as propellants. The vehicle is currently in operation, and the cost of a launch is about $62 million. The first stage of the rocket is reusable; it can return to Earth and be used again. The Falcon 9 is designed not only to launch commercial communication satellites but also to deliver Dragon 1 to the ISS. Dragon 1 can carry a six-ton payload from Earth; this craft supplies the ISS with everything it needs and also takes goods back.

The prototype of SpaceX's Starship had its first free flight on July 25, 2019

Dragon 2 is designed to deliver a crew of four people to the ISS and back to Earth. There is now also an ultra-heavy launch vehicle with a payload capacity of almost 64 tonnes: the most powerful and heaviest vehicle of the family, called the Falcon Heavy. This rocket was first launched on February 6th, 2018, and the test was successful; the rocket sent Elon Musk's car, a red Tesla Roadster, into space. After this debut, subsequent launches were also conducted without problems. The launch cost is estimated at $150 million.

The first stage of the Falcon Heavy consists of three parts: three blocks containing 27 incredibly powerful engines, nine in each one. The thrust created at takeoff is comparable to eighteen Boeing 747s at full power. The second stage is equipped with a single engine. It is planned that the vehicle will be used for missions to the Moon and Mars. Currently, SpaceX is working on the Starship crewed spacecraft. According to its creators, this vehicle will be much larger and heavier than all of the company's existing rockets and will be able to deliver cargo weighing more than a hundred tons into space. The launch of Starship into space, with a payload bound for Mars, is planned for 2022. Who knows, one of mankind's largest dreams may come true within the next year.

"When something is important enough, you do it even if the odds are not in your favor." – Elon Musk

Stephen Hawking's final theory of black holes – Hawking radiation

When a massive star dies, it leaves a small but dense remnant core in its wake. If the mass of the core is more than 3 times the mass of the sun, the force of gravity overwhelms all other forces and a black hole is formed. Imagine a star 10 times more massive than our sun being squeezed into a sphere with a diameter equal to the size of New York City. The result is a celestial object whose gravitational field is so strong that nothing, not even light, can escape it. The history of black holes starts with the father of all physics, Isaac Newton. In 1687, Newton gave the first description of gravity in his publication Principia Mathematica, which would change the world.

Then, 100 years later, John Michell proposed the idea that there could exist a structure massive enough that not even light would be able to escape its gravitational pull. In 1796, the famous French scientist Pierre-Simon Laplace made an important prediction about the nature of black holes: he suggested that because even the speed of light was slower than the escape velocity of a black hole, these massive objects would be invisible. In 1915, Albert Einstein changed physics forever by publishing his theory of general relativity, in which he explained spacetime curvature and gave a mathematical description of a black hole. And in 1964, John Wheeler gave these objects the name "black hole."

The Gargantua in Interstellar is an incredibly close representation of an actual black hole

In classical physics, the mass of a black hole cannot decrease; it can either stay the same or get larger, because nothing can escape a black hole. If mass and energy are added to a black hole, then its radius and surface area should also get bigger. For a black hole, this radius is called the Schwarzschild radius. The second law of thermodynamics states that the entropy of a closed system always increases or remains the same. In 1974, Stephen Hawking, an English theoretical physicist and cosmologist, proposed a groundbreaking theory about a special kind of radiation, which later became known as Hawking radiation. Hawking had postulated an analogous theorem for black holes, called the second law of black hole mechanics: in any natural process, the surface area of the event horizon of a black hole always increases or remains constant; it never decreases. In thermodynamics, a black body doesn't transmit or reflect any incident radiation; it only absorbs it.
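
For reference, the Schwarzschild radius mentioned above has a simple closed form (the numerical value is the standard figure for one solar mass):

    r_s = \frac{2GM}{c^2} \approx 2.95\,\text{km} \times \frac{M}{M_\odot}

Here G is the gravitational constant, M is the black hole's mass, and c is the speed of light. A black hole of ten solar masses, like the one imagined above, would have a radius of about 30 km, roughly the scale of a city.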

When Stephen Hawking first saw these ideas, he found the notion of shining black holes preposterous. But when he applied the laws of quantum mechanics to general relativity, he found the opposite to be true: he realized that stuff can come out near the event horizon. In 1974, he published a paper that outlined a mechanism for this shine, based on the Heisenberg uncertainty principle. According to quantum mechanics, for every particle in the universe there exists an antiparticle. These particles always exist in pairs and continually pop in and out of existence everywhere in the universe. Typically they don't last long, because as soon as a particle and its antiparticle pop into existence, they annihilate each other and cease to exist almost immediately after their creation.

The event horizon is the boundary within which nothing can escape the black hole's gravity. If a virtual particle pair blips into existence very close to the event horizon, one of the particles can fall into the black hole while the other escapes. The one that falls in effectively has negative energy, which is, in layman's terms, akin to subtracting energy from the black hole, or taking mass away from it. The other particle of the pair, the one that escapes, has positive energy and is referred to as Hawking radiation.

The first-ever image of a black hole by the Event Horizon Telescope (EHT), 2019

Due to the presence of Hawking radiation, a black hole continues to lose mass and keeps shrinking until the point where it loses all its mass and evaporates. It is not clearly established what an evaporating black hole would actually look like. The Hawking radiation itself would contain highly energetic particles, antiparticles, and gamma rays. Such radiation is invisible to the naked eye, so an evaporating black hole might not look like anything at all. It is also possible that Hawking radiation might power a hadronic fireball, which could degrade the radiation into gamma rays and particles of less extreme energy, making an evaporating black hole visible. Scientists and cosmologists still don't completely understand how quantum mechanics explains gravity, but Hawking radiation continues to inspire research and provide clues into the nature of gravity and how it relates to the other forces of nature.
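
A worked equation makes the evaporation argument concrete. The temperature of Hawking radiation is inversely proportional to the black hole's mass:

    T_H = \frac{\hbar c^3}{8 \pi G M k_B} \approx 6.2 \times 10^{-8}\,\text{K} \times \frac{M_\odot}{M}

So a solar-mass black hole is far colder than its surroundings and absorbs more than it radiates, while a very small black hole is hot, radiates fiercely, and shrinks ever faster as it loses mass, which is why evaporation is expected to end in a final burst rather than a gentle fade.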

CYBER CRIME CASE STUDY IN INDIA

Cyber crime encompasses any criminal act dealing with computers and networks (commonly called hacking). Additionally, cyber crime also includes traditional crimes conducted through the internet. For example, the computer may be used as a tool in the following kinds of activity: financial crimes, sale of illegal articles, pornography, online gambling, intellectual property crime, e-mail spoofing, forgery, cyber defamation, and cyber stalking. The computer may, however, also be the target of unlawful acts in the following cases: unauthorized access to computers, computer systems, or computer networks; theft of information contained in electronic form; e-mail bombing; Trojan attacks; internet time theft; theft of a computer system; and physically damaging a computer system.

Cyber law is the law governing cyberspace. Cyberspace is a wide term and includes computers, networks, software, data storage devices (such as hard disks and USB disks), the internet, websites, emails, and even electronic devices such as cell phones and ATM machines.

Computer crimes encompass a broad range of potentially illegal activities. Generally, however, they may be divided into two categories:

(1) Crimes that target computer networks or devices directly; Examples – Malware and malicious code, Denial-of-service attacks and Computing viruses.

(2) Crimes facilitated by computer networks or devices, the primary target of which is independent of the computer network or device. Examples – Cyber stalking, Fraud and identity theft, Phishing scams and Information warfare.

CASE STUDIES

Case no. 1: Hosting Obscene Profiles (Tamil Nadu)

This case concerns the hosting of obscene profiles and was solved by an investigation team in Tamil Nadu. The complainant was a girl, and the suspect was her college mate. The suspect had created fake profiles of the complainant and posted them on dating websites, as revenge for her not accepting his marriage proposal. That is the background of the case.

Investigation Process

Let's get into the investigation process. Acting on the girl's complaint, the investigators began by analysing the web pages where her profile and details appeared. They logged in to the fake profile by determining its credentials and, using the access logs, found out where the profiles had been created. They identified two IP addresses and also identified the ISP. From the ISP's details they determined that the content had been uploaded from a café. The investigators went to that café and, from its register, determined the suspect's name. He was then arrested, and on examining his SIM card the investigators found the complainant's number.

Conclusion

The suspect was convicted of the crime and was sentenced to two years of imprisonment as well as a fine.

Case no. 2: Illegal Money Transfer (Maharashtra)

This case concerns an illegal money transfer and took place in Maharashtra. The accused was a person who worked in a BPO handling the business of a multinational bank. He used confidential information about the bank's customers to transfer huge sums of money from their accounts.

Investigation Process

Let's see the investigation process of the case. Acting on the complaint received from the firm, the investigators analysed and studied the firm's systems to determine the source of the data theft. During the investigation the server logs of the BPO were collected; by tracing the IP address to the internet service provider, the investigators found that the illegal transfers had ultimately been made through a cyber café, and that SWIFT codes had been used to make them. The registers kept at the cyber café assisted in identifying the accused in the case. In all, 17 accused were arrested.

Conclusion

The trial in this case is not yet complete; it is pending in court.

Case no. 3: Creating a Fake Profile (Andhra Pradesh)

The next case concerns the creation of a fake profile and took place in Andhra Pradesh. The complainant received obscene emails from unknown email IDs, and she also noticed that obscene profiles and pictures of her had been posted on matrimonial sites.

Investigation Process

The investigators collected the original email and determined its IP address. From the IP address they could confirm the internet service provider, and this led the investigating officer to the house of the accused. They searched the house and seized a desktop computer and a handycam. By analysing and examining the desktop computer and the handycam, they found the obscene emails, and on the handycam they found an identical copy of the uploaded photos. The accused was the divorced husband of the complainant.
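
To make the first step of such an investigation concrete, here is a minimal, hypothetical Python sketch, using only the standard library, that pulls IPv4 addresses out of the "Received:" headers of a saved raw email; this is roughly how an originating IP address is identified (the file name is a placeholder):

    import re
    from email import message_from_binary_file

    IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

    # "evidence.eml" stands in for a raw email exported from the mail server.
    with open("evidence.eml", "rb") as f:
        msg = message_from_binary_file(f)

    # Each relay adds a "Received:" header; the bottom-most is closest to the sender.
    for header in msg.get_all("Received", []):
        for ip in IPV4.findall(header):
            print(ip)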

Conclusion

Based on the evidence collected from the handycam and the desktop computer, a charge sheet was filed against the accused, and the case is currently pending trial.

Hacking is a widespread crime nowadays due to the rapid development of computer technologies. To protect against hacking there are numerous brand-new technologies, updated every day, but it is often difficult to withstand a hacker's attack effectively. From these case studies, one is expected to learn about the causes and effects of hacking and to evaluate the full impact a hacker can have on an individual or an organization.

The history of surgery and its advancements today

Treating illness by using tools to remove or manipulate parts of the human body is an old idea. Even minor operations carried high risks, but that doesn't mean all early surgery failed. Indian doctors, beginning centuries before the birth of Christ, successfully removed tumors and performed amputations and other operations. They developed dozens of metal tools, relied on alcohol to dull the patient's senses, and controlled bleeding with hot oil and tar. The 20th century brought even more radical change through technology: advances in fiber-optic technology and the miniaturization of video equipment have revolutionized surgery. The laparoscope is the James Bond-like gadget of the surgeon's repertoire of instruments. Only a small incision is made through the patient's abdominal wall, into which the surgeon puffs carbon dioxide to open up a passage.

Using a laparoscope for visual assessment, diagnosis, and even surgery causes less physiological damage, reduces patients' pain, and speeds their recovery, leading to shorter hospital stays. In the early 1900s, Germany's George Kelling developed a surgical technique in which he injected air into the abdominal cavity and inserted a cystoscope, a tube-like viewing scope, to assess the patient's innards. In late 1901, he began experimenting and successfully peered into a dog's abdominal cavity using the technique. Without cameras, laparoscopy's use was limited to diagnostic procedures carried out by gynecologists and gastroenterologists.

By the 1980s, improvements in miniature video devices and fiber optics inspired surgeons to embrace minimally invasive surgery. In 1996, the first live broadcast of a laparoscopy took place. A year later, Dr. J. Himpens used a computer-controlled robotic system to aid in laparoscopy. This type of surgery is now used for gallbladder removal as well as for the diagnosis and surgical treatment of fertility disorders, cancer, and hernias.

Hypothermia, a drop in body temperature significantly below normal, can be life-threatening, as in the case of overexposure to severe wintry conditions. But in some cases, like that of Kevin Everett of the Buffalo Bills, hypothermia can be a lifesaver. Everett fell to the ground with a potentially crippling spinal cord injury during a 2007 football game. Doctors treating him on the field immediately injected his body with a cooling fluid. At the hospital, they inserted a cooling catheter to lower his body temperature by roughly five degrees, at the same time proceeding with surgery to fix his fractured spine. Despite fears that he would be paralyzed, Everett regained his ability to walk, and advocates of therapeutic hypothermia feel his lowered body temperature may have made the difference.

Robotic surgery allows surgeons to perform complex rectal cancer surgery

Therapeutic hypothermia is still a controversial procedure. The side effects of excessive cooling include heart problems, blood clotting, and increased infection risk. On the other hand, supporters claim, it slows down cell damage, swelling, and other destructive processes well enough that it can mean successful surgery after a catastrophic injury. Surgical lasers can generate heat up to 10,000°F on a pinhead-sized spot, sealing blood vessels and sterilizing tissue. Surgical robots and virtual computer technology are changing medical practice: robotic surgical tools increase precision. In 1998, heart surgeons at Paris's Broussais Hospital performed the first robotic surgery. The new technology allows enhanced views and precise control of instruments.

"After a complex laparoscopic operation, the 65-year-old patient was home in time for dinner." – Elisa Birnbaum, surgeon

Can we fix our Ozone layer? The Montreal protocol

Imagine that one day our ozone layer disappeared. What would happen? How long could we survive without it? The ozone layer is a region of Earth's atmosphere that contains a high concentration of ozone (O3). Ozone is a highly reactive gas composed of three oxygen atoms, found mainly in the lower portion of the stratosphere. It absorbs 97 to 99 percent of the Sun's ultraviolet rays. Direct exposure to UV rays can cause serious skin problems including sunburn, skin cancer, premature ageing of the skin, and solar elastosis. It can also cause eye problems and can weaken our immune system.

The depletion of the ozone layer was first described by the Dutch chemist Paul Crutzen, who demonstrated that nitrogen oxides react with oxygen atoms and thereby slow the creation of ozone (O3). Later, in 1974, the American chemists Mario Molina and F. Sherwood Rowland observed that chlorofluorocarbon (CFC) molecules emitted by man-made machines such as refrigerators, air conditioners, and airplanes could be the major source of chlorine in the atmosphere. One chlorine atom can destroy 100,000 ozone molecules.
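
The reason a single atom can do so much damage is that chlorine acts as a catalyst: it is regenerated at the end of each cycle and goes on to attack the next ozone molecule. The standard simplified cycle is:

    Cl + O3 → ClO + O2
    ClO + O → Cl + O2
    Net: O3 + O → 2 O2

The chlorine atom emerges unchanged, so one atom can run through this loop many thousands of times before it is finally locked away in a more stable compound.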

Not all chlorine contributes to ozone layer depletion; chlorine from swimming pools, sea salt, industrial plants, and volcanoes does not reach the stratosphere. The ozone hole over Antarctica, discovered by British scientists, is one of the largest and deepest depletions, and it made worldwide headlines. According to NASA scientist Paul Newman, if depletion had continued at that rate, our ozone layer would likely have disappeared by 2065. If that happened, UV rays from the sun would reach Earth directly and cause severe health issues: humans could last about 3 months, and plants might die within 2 weeks under heavy UV radiation. Earth would become uninhabitable.

Fortunately, in 1987 the Montreal Protocol was adopted, banning chlorofluorocarbons and other chemicals that cause ozone depletion. Surprisingly, it works: research from 2018 indicates that the ozone layer has been repairing itself at a rate of 1% to 3% per decade since 2000. Still, it will take at least 50 years for a complete recovery.

The greenhouse effect allows the short-wave radiation of sunlight to pass through the atmosphere to Earth's surface but makes it difficult for heat, in the form of long-wave radiation, to escape. This effect blankets the Earth and keeps our planet at a reasonable temperature to support life. Earth radiates energy, of which about 90 percent is absorbed by atmospheric gases such as water vapor, carbon dioxide, ozone, methane, nitrous oxide, and others. The absorbed energy is radiated back to the surface and warms Earth's lower atmosphere.

These gases have come to be called greenhouse gases because they hold in light and heat, just as a greenhouse does for the sake of the plants inside. Greenhouse gases are essential to life, but only at an appropriate balance point. They increased during the 20th century due to industrial activity and fossil fuel emissions; for example, the concentration of carbon dioxide in the atmosphere has recently been growing by about 1.4 percent annually. This increase in greenhouse gases is one of the contributors to the observed patterns of global warming. On September 16th, World Ozone Day, we can celebrate our success. "But we must all push to keep hold of these gains, in particular by remaining vigilant and tackling any illegal sources of ozone-depleting substances as they arise," says the UN Ozone Secretariat. Without the Montreal Protocol, life on Earth could have been a question mark, so keep working hard. "OZONE FOR LIFE."

The beginning of Art: Visual arts history

Expressing oneself through art seems a universal human impulse, while the style of that expression is one of the distinguishing marks of a culture. As difficult as it is to define, art typically involves a skilled, imaginative creator whose creation is pleasing to the senses and often symbolically significant or useful. Art can be verbal, as in poetry, storytelling, or literature, or it can take the form of music and dance. The oldest stories, passed down orally, may be lost to us now, but thanks to writing, tales such as the Epic of Gilgamesh or the Iliad entered the record and still hold meaning today. Visual art dates back 30,000 years, to when Paleolithic humans decorated themselves with beads and shells. Then as now, skilled artisans often mixed aesthetic effect with symbolic meaning.

A masterpiece of Johannes Vermeer, 1665 –“Girl with a Pearl Earring”.

In an existence that centered on hunting, ancient Australians carved animal and bird tracks into their rocks. Early cave artists in Lascaux, France, painted or engraved more than 2,000 real and mythical animals. Ancient Africans created stirring masks, highly stylized depictions of animals and spirits that allow the wearer to embody the spiritual power of those beings. Even when creating tools or kitchen items, people seem unable to resist decorating or shaping them for beauty. Ancient hunters carved the ivory handles of their knives; Ming dynasty ceramists embellished plates with graceful dragons; modern Pueblo Indians incorporate traditional motifs into their carved and painted pots. The Western fine arts tradition values beauty and message. Once heavily influenced by Christianity and classical mythology, painting and sculpture have more recently moved toward personal expression and abstraction.

Humans have probably been molding clay, one of the most widely available materials in the world, since the earliest times. The era of ceramics began, however, only after the discovery that very high heat renders clay hard enough to be impervious to water. As societies grew more complex and settled, the need for ways to store water, food, and other commodities increased. In Japan, the Jomon people were making ceramics as early as 11,000 B.C. By about the seventh millennium B.C., kilns were in use in the Middle East and China, achieving temperatures above 1,832°F. Mesopotamians were the first to develop true glazes, though the art of glazing arguably reached its highest expression in the celadon and three-color glazes of medieval China. In the New World, although potters never reached the heights of technology seen elsewhere, Moche, Maya, Aztec, and Puebloan artists created a diversity of expressive figurines and glazed vessels.

The prehistoric cave paintings of El Castillo, Spain are almost 40,800 years old

When the Spanish nobleman Marcelino Sanz de Sautuola described the paintings he discovered in a cave at Altamira, contemporaries declared the whole thing a modern fraud. Subsequent finds confirmed the validity of his claims and proved that Paleolithic people were skilled artists. Early artists used stone tools to engrave shapes into walls. They used pigments from hematite, manganese dioxide, and evergreens to achieve red, yellow, brown, and black colors. Brushes were made from feathers, leaves, and animal hair. Artists also used blowpipes to spray paint around hands and stencils.

"History is remembered by its art, not its war machines." – James Rosenquist

The origin of glass – Why is it transparent?

Archaeological findings suggest that glass was first created during the Bronze Age in the Middle East. To the southeast, in Egypt, glass beads have been found dating back to about 2500 B.C.E. Glass is made from a mixture of silica sand, calcium oxide, soda, and magnesium, which is melted in a furnace at 2,730°F (1,500°C). Most early furnaces produced insufficient heat to melt the glass properly, so glass was a luxury item that few people could afford. This situation changed in the first century B.C.E., when the blowpipe was discovered. Glass manufacturing spread throughout the Roman Empire in such quantities that glass was no longer a luxury. It flourished in Venice in the fifteenth century, where a soda-lime glass known as "cristallo" was developed; Venetian glass objects were said to be the most delicate and graceful in the world.

How is glass made?

It all begins in the Earth's crust, where the two most common elements are silicon and oxygen. These react together to form silicon dioxide, whose molecules arrange themselves into a regular crystalline form known as quartz. Quartz is commonly found in sand, where it often makes up most of the grains, and it is the main ingredient in most types of glass. You have probably noticed that glass isn't made of multiple tiny bits of quartz, and for good reason: the edges of the rigidly formed grains, and smaller defects within the crystal structure, reflect and disperse light that hits them. But when quartz is heated high enough, the extra energy makes the molecules vibrate until they break the bonds holding them together and become a flowing liquid, the same way that ice melts into water. Unlike water, though, liquid silicon dioxide does not re-form into a crystalline solid when it cools. Instead, as the molecules lose energy, they become less and less able to move into an ordered position, and the result is what is called an amorphous solid: a solid material with the chaotic structure of a liquid, which allows the molecules to freely fill in any gaps. This makes the surface of glass uniform on a microscopic level, allowing light to strike it without being scattered in different directions.

How is glass transparent?

Ancient glass materials found in Rome.

Why is light able to pass through glass rather than being absorbed, as with most solids? You may know that an atom consists of a nucleus with electrons orbiting around it, but you may not know that an atom is mostly empty space, so light passes through atoms easily without hitting any of these particles. Then why aren't all materials transparent? The answer lies in the different energy levels that the electrons in an atom can have. Consider an atom: an electron in it is initially assigned to a certain orbit, but if it gains enough energy, it can reach an excited state and jump to a higher orbit. One of the light photons passing through can provide the needed energy. But there is a catch: the energy of the photon has to be exactly the right amount to lift an electron to the next level; otherwise the atom will just let the photon pass by. It so happens that in glass the energy levels are spaced so far apart that photons of visible light can't provide enough energy to move an electron between them. Photons of ultraviolet light, on the other hand, carry just the right amount of energy and are absorbed; that's why you can't get a suntan through glass. This amazing property of being both solid and transparent has given glass many uses throughout the centuries.
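
A quick calculation makes the energy argument concrete. A photon's energy is inversely proportional to its wavelength:

    E = \frac{hc}{\lambda} \approx \frac{1240\,\text{eV·nm}}{\lambda}

A visible photon (roughly 400-700 nm) therefore carries about 1.8 to 3.1 eV, while an ultraviolet photon at 300 nm carries about 4 eV. Common window glass begins absorbing at around 300 nm (a typical figure; the exact edge depends on the glass's composition), which is why visible light passes straight through while UV is stopped.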

In the 1950s Sir Alastair Pilkington introduced "float glass production", a revolutionary method still used to make glass today. Other developments have included safety glass, heat-resistant glass, and fiber optics, in which light pulses are sent along thin fibers of glass. Fiber-optic devices are used in telecommunications and in medicine for viewing inaccessible parts of the human body.

Are perpetual motion machines possible or not? Free energy?

Most of us may have had this idea: magnets attract each other at opposite poles, so why can't we use this to create free energy? For instance, place a magnet or a piece of metal in a car, attach another magnet to a rod, and hold it in front of the car so that the two keep attracting each other. With this idea, we could move the car without any energy, forever. A perpetual motion machine is a device that is supposed to work indefinitely without any external energy source. Imagine a windmill that produced the breeze it needed to keep rotating, or a light bulb whose glow provided its own electricity. Such devices have captured many inventors' imaginations because they could transform our relationship with energy. It sounds cool, right? But there is only one problem: it won't work.

Bhaskara's wheel – the oldest perpetual motion machine design

On countless occasions in history, people have claimed to have made a perpetual motion machine. Around 1159 A.D., a mathematician called Bhaskara the Learned sketched a design for a wheel containing curved reservoirs of mercury. He reasoned that as the wheel spun, the mercury would flow to the bottom of each reservoir, leaving one side of the wheel perpetually heavier than the other; the imbalance would keep the wheel turning forever. Bhaskara's drawing was one of the earliest designs for a perpetual motion machine. Many more claims followed, like Zimara's self-blowing windmill in the 1500s, the capillary bowl, in which capillary action supposedly forces the water upwards, the Oxford Electric Bell, which ticks back and forth due to charge repulsion, and so on. In fact, the US Patent Office stopped granting patents for perpetual motion machines without a working prototype.

Why perpetual motion machines won’t work?

Ideas for perpetual motion machines all violate one or more fundamental laws of thermodynamics, the laws that describe the relationship between different forms of energy. The first law of thermodynamics says that energy can neither be created nor destroyed: you can't get out more energy than you put in. That rules out a useful perpetual motion machine right away, because a machine could only ever produce as much energy as it consumed; there wouldn't be any leftover energy to power a car or charge a phone. But what if you just wanted the machine to keep itself moving? Take Bhaskara's wheel: the moving parts that make one side of the wheel heavier also shift its center of mass downward, below the axle. With a low center of mass, the wheel just swings back and forth like a pendulum and eventually stops. In the 17th century, Robert Boyle came up with an idea for a self-watering pot. He theorized that capillary action, the attraction between liquids and surfaces that pulls water through thin tubes, might keep the water cycling around the bowl. But if the capillary action is strong enough to overcome gravity and draw the water up, it will also prevent the water from falling back into the bowl.
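
In equation form, the first law is a bookkeeping rule:

    \Delta U = Q - W

where ΔU is the change in the machine's internal energy, Q is the heat added to it, and W is the work it does. A machine with no energy source (Q = 0) that does useful work (W > 0) must run down its internal energy (ΔU < 0), and once that energy is exhausted it stops; perpetual output would require ΔU to keep falling forever, which is impossible.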

John Keely’s perpetual motion machine

For each of these machines to keep moving, they would have to create some extra energy to nudge the system past its stopping point, breaking the first law of thermodynamics. There are machines that seem to keep moving, but in reality they invariably turn out to be drawing energy from some external source. Even if engineers could design a machine that didn't violate the first law of thermodynamics, it still wouldn't work in the real world, because of the second law. The second law of thermodynamics tells us that energy tends to spread out through processes like friction and heating. Any real machine has moving parts or interactions with air or liquid molecules that generate tiny amounts of friction and heat, even in a vacuum. That heat is energy escaping, and it keeps leaking out, reducing the energy available to move the system itself until the machine inevitably stops. The same goes for the idea of the car with magnets: even if the magnet were powerful enough to move the car, friction would come into play and eventually stop it. These two laws of thermodynamics defeat every idea for perpetual motion, so we can conclude that perpetual motion machines are impossible.

YOU CAN'T GET SOMETHING FOR NOTHING.