Network Security

Network security refers to the monitoring and control of unauthorized access, misuse, and any unwanted changes to a networking system. In other words, network security is the strategy that ensures the safety of an organization’s assets and its software and hardware resources.

Why Do We Need Network Security?

Connecting our gadgets to the internet and other networks opens up a whole new world of possibilities. We can get the information we need without keeping it on our devices indefinitely, and we can interact with one another, which lets us collaborate and coordinate our efforts. These linked devices make up the networks that run our lives.

Unless properly secured, any network is vulnerable to malicious use and accidental harm. Private data, such as trade secrets and client information, can be exposed by hackers, disgruntled employees, or inadequate security measures inside the business.

For example, losing private research can cost a company millions of dollars by robbing it of the competitive advantage it paid for. When hackers steal consumer information and sell it for use in fraud, the company suffers bad press and a loss of public confidence.

Rather than damaging the network, most common network attacks aim to gain access to information by spying on users’ conversations and data.

Attackers, however, may do more than just steal data. They may be able to harm users’ devices or manipulate systems to obtain physical access to facilities. This puts the organization’s assets and members in jeopardy.

Effective network security measures keep data safe and protect vulnerable systems from outside tampering. This lets network users stay secure while focusing on the organization’s objectives.

Information security is needed for the following reasons:

  • Unwanted Changes – To secure information against unauthorized users altering it, whether unintentionally or on purpose.
  • Information Loss and Proper Delivery – To prevent data loss and ensure that data reaches its rightful recipient in a timely manner.
  • Non-repudiation – To ensure that each node receives an acknowledgement of a message, so that a sender cannot later deny it. For example, a customer places an order to buy a few shares of XYZ on the open market, but denies the transaction two days later because the price has dropped.
  • Hiding the Identity of the Original Sender – To prevent a network user from sending a mail or message in such a way that it appears to the recipient to have come from a third party. For example, user X crafts a message with instructions favourable to himself and sends it to user Y in such a way that Y accepts the message as coming from Z, the organization’s boss.
  • Inappropriate Delay – To protect data from unintentional delays on the path to its intended destination, so that it arrives within the specified time frame.
  • Corrupting or Deleting – To protect our hardware, such as hard drives, PCs, and laptops, from malware, viruses, and other threats that might harm our system by corrupting or destroying the data stored on it.
  • Malware & Unwanted Software – To safeguard our computers against malicious software that, once installed, can damage our systems just as a hacker would.
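The first two points above, detecting unwanted changes and confirming proper delivery, are commonly enforced with message authentication codes. Here is a minimal sketch using Python's standard hmac module; the key and messages are purely illustrative, and a real deployment would obtain keys from a key-management system:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # illustrative only; never hard-code real keys

def sign(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag the receiver can use to detect tampering."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer 100 shares of XYZ"
tag = sign(msg)
print(verify(msg, tag))                             # the untampered message verifies
print(verify(b"transfer 900 shares of XYZ", tag))   # any unwanted change is detected
```

Because both parties share the key, an HMAC proves integrity but not full non-repudiation; for that, asymmetric digital signatures are the usual tool.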

INDIA AND INDUSTRIAL REVOLUTION 4.0 Part-2

This article is in continuation with the previous part INDIA AND INDUSTRIAL REVOLUTION 4.0 Part-1.

Reasons why Industrial Revolution 4.0 is lagging-

1. Security is a crucial foundation of the Internet and the major challenge for Industrial Revolution 4.0. As the trend grows from millions of connected devices to billions, the opportunities to exploit safety vulnerabilities grow with it. In cheap or poorly designed devices, incomplete data streams increase the chance of data theft, which can put people’s health and safety at risk. Many IoT deployments also consist of collections of identical or nearly identical devices. This homogeneity magnifies the potential impact of any single security weakness by the total number of devices that share the same features.

2. Authenticity, trustworthiness, and confidentiality are important, but other requirements matter as well: selective access to certain facilities, preventing devices from sharing data with other things at certain times, and securing business communications between smart objects against adversaries. India’s data networks are still fragile and costly compared with those of developed countries, and from an Indian perspective cloud storage is still at an emerging stage. Sending data to a cloud service for processing often involves a third party, and gathering this information raises legal and regulatory challenges under data protection and privacy law. Realizing the opportunities of Industrial Revolution 4.0 will require new strategies that accommodate a broad range of privacy expectations while still fostering innovation in new technologies and services.

3. The absence of standards and documentation can lead to senseless behaviour by devices. Cheap or poorly designed and configured devices have undesirable consequences for networking resources. Without standards to guide them, developers and manufacturers sometimes design products that operate in disruptive ways on the Internet. When a technology goes through a standards development process, it becomes widely available, usable by all participants, and grows faster. In today’s world, global standards are expected to be followed by every local station.

4. Implementing any technology requires a team of skilled people with ample knowledge of networks, hardware, software, and the technology itself. India is at a stage of development where workers fear that once a technology spreads they will lose their jobs, or that the new technology has no future, so they take no initiative to learn it. As a result, every organization faces many problems during the changeover from legacy systems to IoT-enabled systems. Scalability, fault tolerance, and power supply are also big challenges in India.

5. Advanced technologies require advanced mechanisms, which require more money. As a developing country, India cannot invest on a large scale in Industrial Revolution 4.0. Starved of this fuel called ‘money’, India is unable to keep pace with Industrial Revolution 4.0.

6. Another major problem for a developing country like India is the fuel needed to keep it running. With the population rising steadily, demand keeps increasing. India produced 557 million metric tons of coal in 2012–13, and its rapidly growing power industry consumed the majority of it. Coal production has risen steadily since the industry was nationalized in the 1970s, a trend almost certain to accelerate as the country faces growing urbanization and an expanding middle class. India also depends heavily on imports for its petroleum needs and is the world’s fourth-largest importer of crude oil.

7. The rate of illiteracy in India is alarming: five out of every ten people are illiterate, and conditions in villages are worse than in cities. Though several primary schools have been set up in rural India, the problem persists. Providing education only to children will not solve it either, as many adults in India also remain untouched by education. The Indian education system is blamed time and again for being too theoretical rather than practical and skill-based; students study to score marks, not to gain knowledge. This so-called modern education system was introduced by the colonial masters to create servants who could serve but not lead, and we still have the same system. Rabindranath Tagore wrote many articles offering suggestions to change India’s education system, but success remains as elusive as ever.

Read more about India and Industrial Revolution in next part, INDIA AND INDUSTRIAL REVOLUTION 4.0 Part-3.

Screen-sharing is the new Movie Theatre

And it’s so much better than you’d ever have thought.

When the idea was first introduced to me, my biggest apprehension was this: would whatever was being screen-shared to me even be enticing enough to keep me watching for that long?

But, all apprehension considered, it has proved to be such a delight. Yes, network issues occur, and the shared video might lag or glitch, but beyond that, the time you invest in watching something new, something you enjoy, never feels like a distraction. It’s a commitment too, not a half-hearted one but a promising act.

We could even go beyond Zoom and talk about options such as Amazon Watch Party for those with laptops handy, or check out streaming websites that are available and trustworthy. Finding a site you trust enough can take a while, but make no mistake, it’s enjoyable nonetheless.

Moving on: for someone who keeps to themselves and doesn’t really enjoy social interaction or outings, this seems just right, even without a pandemic over our heads. We are comfortable on our own couches or beds with a comforter over us, we can chit-chat with our group about the movie or show we’re watching, and we can comment on scenes without being yelled at by someone else, which you can’t do in a theatre.

Also, once the entertainment is done and dusted, you can still stay connected and converse with your friend over a cup of coffee, all within the confines of your own home.

So, if you’ve not yet tried this method with your friend(s), make sure you carve out a small slot in your schedule to unwind, even if it is only for 30 minutes, and let me know how it goes.

Content marketing

When it comes to understanding a compound term, it helps to break it in half and understand each part to get the whole picture. Content marketing, which seems a complex term, turns out to be a simple one: content is any written piece, and marketing is the selling of a commodity. Content marketing, then, is the writing of valuable and relevant content for a product that conveys its value and gives complete information about it.

It is not as straightforward as it seems, because writing content that attracts the reader is very important. Here are some simple tips for writing good content for a product and making content marketing successful:

Answer your audience’s questions, and write so as to give every answer possible, providing them something of value that keeps them wanting more.

Segment your audience into groups with different goals, considering their interests, and write accordingly.

Content writing goals fall into four categories: entertain, inspire, educate, and convince.

Think about which media you are writing for. The types of media are owned, earned, and paid; write the information each of the three requires.

Optimize list content – featured snippets for listicles show a bulleted or numbered list on the results page. They are great for “best” and “how-to” searches; with a list, searchers get a quick, easy-to-understand answer.

Read value-added material such as books, magazines, and newspapers to enhance your vocabulary and encounter new styles of writing.

There are many more things you can do; just research, stay aware, take your thoughts and a pen, and get started.

It seems difficult and vast only until you begin, so don’t worry. If you like writing and can think of innovative lines, this field is for you; you can do wonders!

CLOUD COMPUTING

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet.

Cloud computing is the delivery of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale.

There are three main types:

• Infrastructure as a service (IaaS)
• Platform as a service (PaaS)
• Software as a service (SaaS)

Examples of Cloud Computing —

The main types of cloud computing are software as a service, platform as a service, and infrastructure as a service. Serverless computing, also known as function as a service (FaaS), is also a popular method of cloud computing for businesses.

Benefits of cloud computing —

• Reduced IT costs. Moving to cloud computing may reduce the cost of managing and maintaining your IT systems.

• Scalability.

• Business continuity.

• Collaboration efficiency.

• Flexibility of work practices.

• Access to automatic updates

One of the most obvious benefits of cloud computing is the mobility it brings, both to the recreational user and to the corporate and business user. Many of us are already familiar with some cloud computing services, like Google Docs or email services.

Five characteristics of cloud computing

• On-demand self-service. Cloud computing resources can be provisioned without human interaction from the service provider.

• Broad network access.

• Multi-tenancy and resource pooling.

• Rapid elasticity and scalability. 

• Measured service.
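Two of these characteristics, rapid elasticity and measured service, can be sketched in a few lines of Python. The thresholds, instance counts, and hourly rate below are invented for illustration and do not correspond to any real provider:

```python
def scale_instances(current: int, cpu_load: float,
                    low: float = 0.3, high: float = 0.8) -> int:
    """Rapid elasticity: add capacity under load, release it when idle."""
    if cpu_load > high:
        return current + 1          # scale out on a load spike
    if cpu_load < low and current > 1:
        return current - 1          # scale in, but never below one instance
    return current

def monthly_bill(instance_hours: float, rate_per_hour: float = 0.05) -> float:
    """Measured service: pay only for the capacity actually consumed."""
    return round(instance_hours * rate_per_hour, 2)

print(scale_instances(current=2, cpu_load=0.9))   # load spike: grows to 3
print(scale_instances(current=2, cpu_load=0.1))   # idle: shrinks to 1
print(monthly_bill(instance_hours=720))           # one instance for a 30-day month
```

Real autoscalers add cooldown periods and minimum/maximum pool sizes, but the feedback loop is the same: observe a metric, adjust capacity, and meter whatever ran.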

Google Drive is a cloud-based storage solution that allows you to save files online and access them anywhere from any smartphone, tablet, or computer. Drive also makes it easy for others to edit and collaborate on files.

Disadvantages of cloud computing

• Data loss or theft.

• Data leakage.

• Account or service hijacking.

• Insecure interfaces and APIs.

• Denial of service attacks.

• Technology vulnerabilities, especially on shared environments.


360 WANDER WRITER

WELCOME TO TECHIE WORLD

CODING


if (brain != empty) {
    keep_coding();
} else {
    order_coffee();
}


Coding is nothing but the computer language through which we can develop apps and any other software this technological world requires.

Most people think it is a difficult task. Actually, it is not! You have to be patient and make a persistent effort to learn it. Did you know that Indians have the capacity to be good coders? Most Indians are good at mathematics, and that logical thinking helps in problem-solving, giving them a greater chance of becoming good coders.

Computer coding is an important skill in the current job market. It is valuable in several fields and in job opportunities across the world. Knowing how to code is an asset for anyone looking for a job in the technology or computer science fields.

Coding Languages

  • C Programming
  • HTML
  • CSS
  • JAVASCRIPT
  • C++
  • PYTHON
  • PHP
  • SQL …and many more

Careers for coders

  • Database Administrator 
  • Web Developer 
  • Information Security Analyst
  • Applications Developer
  • Instructional Designer
  • Digital Marketing Manager

Sharpen your coding skills

Reading Books

Reading books is a primary way to acquire knowledge. Technology may have changed, but the importance of books and their impact on the human mind remain the same.

Something new each day…

You should take some time out each day and challenge yourself with something new. Of course, it doesn’t have to be complicated.

Play a coding game 

Here are some platforms to practice on:

  • GeeksforGeeks
  • LeetCode
  • HackerRank
  • CodeChef
  • Topcoder

AGNI: THE FAMILY OF BALLISTIC MISSILES

The name Agni (meaning fire) comes from one of the five elements of nature (Agni, Vayu, Prithvi, Akash, Jala). Agni missiles are medium- to intercontinental-range ballistic missiles developed under India’s Integrated Guided Missile Development Programme. The family consists of Agni I, Agni II, Agni III, Agni IV, Agni V, Agni P, and Agni VI; here is a brief description of each.

India night-tests Agni-I missile

Agni I:

Agni I is an intermediate-range ballistic missile; it is 14.8 m long, with a diameter of 1.3 m, and weighs 22,000 kg. With a maximum payload of 1,000 kg, the missile has a range of up to 1,200 km. Agni I is used by the Strategic Forces Command (SFC) of the Indian Army. It is made of all-carbon composite materials to protect the payload during its re-entry stage, and it is designed to be launched from road- or rail-mobile Transporter-Erector-Launcher (TEL) vehicles. Development of this missile began in 1999, and it was first tested in January 2002 from a TEL vehicle at the Interim Test Range on Wheeler Island off India’s eastern coast. The missile achieves relatively high accuracy and simplicity thanks to its combination of an inertial guidance system with a terminal-phase radar correlation targeting system on its warhead.

Agni II missile

Agni II:

Agni II is a medium-range, two-and-a-half-stage, solid-propellant ballistic missile; it is 20 m long, with a diameter of 1 m, and weighs around 26,000 kg. With a payload of 820–2,000 kg, the missile has a range of 2,000 to 3,500 km. Agni II was first tested on 11th April 1999 at Wheeler Island off the Odisha coast from the IC-4 launch pad, over a range of 2,000 to 2,200 km. It uses a combination of inertial navigation and GPS in its guidance module, as well as dual-frequency radar correlation for terminal guidance. During a night trial of the nuclear-capable intermediate-range missile on 16th November 2019, the 20-metre-long, two-stage ballistic missile demonstrated a strike range of 2,000 to 3,000 km.

Agni-3 ballistic missile successfully launched by India’s Strategic Forces Command (SFC) from Wheeler Island, off the coast of Odisha on September 21, 2012.

Agni III:

Agni III is an intermediate-range, two-stage, solid-propellant ballistic missile; it is shorter (17 m) and wider (2 m in diameter) than Agni I and Agni II, and weighs up to 44,000 kg. With a payload of 2,500 kg, Agni III has a range of 2,000 to 3,000 km. It is made using advanced carbon composite materials, while the second-stage booster is made of an iron-based steel alloy. Agni III was first tested on 9th July 2006 from Wheeler Island off the coast of the eastern state of Odisha, from a rail-mobile, possibly road-based, TEL (Transporter-Erector-Launcher). It was tested successfully again on 12th April 2007, also from Wheeler Island. A third successive trial was fired on 7th May 2008 from Wheeler Island, achieving a range of 3,500 km while carrying a 1.5-tonne warhead. It is the most accurate strategic ballistic missile in the family, which increases the “kill efficiency” of the weapon, and it was reported that with a lighter payload Agni III can hit targets more than 3,500 km away.

Agni IV missile

Agni IV:

Agni IV is an intermediate-range, two-stage, nuclear-capable ballistic missile; it is 20 m long, with a diameter of 1 m, and weighs up to 17,000 kg. It was previously called Agni II Prime. Agni IV was first tested on 15th November 2011 and again on 19th September 2012 from Wheeler Island (Abdul Kalam Island) off the coast of the eastern state of Odisha. It can reach targets at a range of 3,500–4,000 km with a payload of 800–1,000 kg. On 20th January 2014, during its third test, the missile lifted off from the launcher, reached an altitude of over 800 km, and impacted near the target in the Indian Ocean with remarkable accuracy, carrying a payload of 900 kg. Agni IV is equipped with state-of-the-art technologies, including an indigenously developed ring laser gyro and a composite rocket motor.

Agni V missile

Agni V:

Agni V is an intercontinental-range, three-stage, solid-fuel ballistic missile; it is 17 m long, with a diameter of 2 m, and weighs up to 50,000 kg. Developed by India’s Defence Research and Development Organisation (DRDO), it can reach targets more than 5,500 km away. It was first test-fired on 19th April 2012 from Abdul Kalam Island, formerly known as Wheeler Island, off the coast of Odisha. Its canisterized launch system gives it the requisite operational flexibility: it can be swiftly transported and fired from anywhere. The second test launch of Agni V was completed on 15th September 2013, and the canisterized version was launched in January 2015.

Agni P missile

Agni P:

Agni Prime is a medium-range, two-stage, solid-fuelled ballistic missile developed by the DRDO; it weighs half as much as Agni III. Both the first and second stages of the missile are made of composite materials. Its range extends to 1,000–2,000 km. As per the DRDO, Agni Prime is a new-generation advanced variant of the Agni family, launched on 28th June 2021. “Being a canister-launched missile, Agni-P will give the armed forces the requisite operational flexibility to swiftly transport and fire it from anywhere they want. The test at 10:55 met all mission objectives with a high level of accuracy,” says the DRDO. The missile followed a textbook trajectory with a great level of accuracy.

Agni VI under development

Agni VI:

Agni VI will be a four-stage intercontinental ballistic missile, currently in the hardware development phase, and is expected to carry a Multiple Independently targetable Re-entry Vehicle (MIRV) as well as a Manoeuvrable Re-entry Vehicle (MaRV). It is expected to be the latest and most advanced member of the Agni family.

References:

http://www.indianexpress.com/news/india-successfully-testfires-agni-i-ballist/715859/

https://timesofindia.indiatimes.com/india/india-successfully-test-fires-new-generation-agni-p-ballistic-missile/articleshow/83914848.cms

https://en.wikipedia.org/wiki/Agni_(missile)

https://frontline.thehindu.com/dispatches/india-successfully-test-fires-agni-prime-missile/article35022926.ece

Credits to the rightful owners of the images used.

SOCIAL MEDIA

Needs and merits

The ability to build real relationships is one of the most important aspects of social media and a key factor in attracting people of all ages, genders, and nationalities. It is an important part of developing healthy social networks and powerful social networking tools. People can share their business, products, and services with the world as long as they stay connected and use social media. Social networks allow people to communicate, and everyone can post updates and reports at any time. Companies make full use of social media to improve their online reputation, which greatly helps increase sales and income.

  • Make sure to use all social media platforms to gain insight into the needs of your customers. To make the most of social media for your business, have a content marketing plan; if you need content for any platform, social media asset management tools can help you create high-quality material.
  • You can also use social media to track what people are saying about you. Although social media is mainly used by the public, governments also use it to raise public awareness.
  • Although the use of social media in teaching can be distracting, educators can do everything they can to guide students toward good habits and practices so that they benefit from the advantages social networks provide.
  • If you invest time and effort consistently and continuously, you will see the real benefits of social media marketing. Social media can give your business a huge advantage by helping you connect with your target audience, and it can reach a large number of people; it is also a media-sharing network.
  • Social media advertising is one of two components used together to attract potential customers and spread information and brand awareness, and it differs from classic ads. When you post actively on your social media pages, social media marketing is easy.

SOCIAL MEDIA AS A BLESSING

There are some people who actually make good, even the best, use of social media. Take young entrepreneurs who have just begun a start-up but lack public attention and funding: they create a short 30-second advertisement and attach it to the trending apps that work over a network connection. As for awareness, both social media and mass media have been playing their roles very well, keeping viewers updated with the latest headlines and exposing scams, scandals, and even the worse parts of humanity.

Scope of SEO in India

The growing use of online applications has opened up diverse career avenues for youth across the globe. Whether an applicant is a fresher or an experienced one, these new job profiles offer a better professional future as well as good incentives.

One such job profile is the Search Engine Optimization (SEO) professional, since the majority of people across the world use search engines like Google to resolve their queries. SEO is one of the digital marketing techniques that helps optimize a website and rank it at the top of search engines for relevant queries.

There are many types of optimization, for example on-site content and web-page optimization, or site backlink optimization. SEO not only aims to rank websites better and drive quality traffic but also helps build brand visibility in the online world.

Various recent studies suggest that SEO will be a significant marketing tool for generating leads and acquiring new customers. It has pushed almost every organization to invest more in SEO, leading to increased demand for SEO professionals in India. This growing demand has prompted graduates and web designers to learn SEO for a better career ahead.

E – Waste : the Digital Dark Side


We live in a technology-driven world, and technology is rapidly evolving. Mobile phones have been replaced by smartphones, televisions by LEDs and LCDs, and desktop computers by laptops and tablets. When a new model of a product is introduced to the market, the previous one quickly becomes obsolete, and outmoded items are often discarded as waste. These unwanted, broken, or obsolete electrical goods have reached the end of their useful life and are known as e-waste; they include computers, mobile phones, TVs, washing machines, refrigerators, and so on.


Millions of tonnes of e-waste are produced annually in rich countries; worse, e-waste from developed countries such as the United States and Japan is dumped, often illegally, in developing countries such as Malaysia, Ghana, Nigeria, Pakistan, and India. The expense of treating e-waste in developed countries is significant, while shipping it is relatively cheap, which encourages sending garbage to underdeveloped countries.
In those underdeveloped countries, where waste ends up in landfills and ill-equipped recycling facilities, local residents, industry owners, and labourers collect valuable goods from the garbage according to their needs, and most of them hoard the valuable material to get ahead of the others. Acid baths and burning are employed to recover valuable components; these tactics, in turn, cause major health issues and may harm those who practise them.
Hazardous metals such as lead, mercury, arsenic, copper, cadmium, nickel, zinc, gold, silver, and beryllium are used in components such as circuit boards, electronic parts, motherboards, and cables.

These metals are known to release toxic poisons into the environment through the soil, causing health problems in both animals and humans, and chemicals can leach into the land, polluting both land and water. Major components of e-waste, polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs), have hazardous side effects.
They are among the primary contributors to ozone depletion, and they also accumulate in food chains, posing a major hazard to all animals on earth.

In fact, the growing environmental footprint of e-waste is a source of concern, and consumers and producers are jointly responsible for managing its growing volume. The majority of electronic materials include reusable components containing metals such as copper, aluminium, lead, and iron. A dedicated eco-friendly process should be created to properly recover these substances from waste materials.
Recycling models must be promoted by both manufacturers and approved recyclers. Producers can join the recycling chain by offering a collection service and, compared with the unorganised sector, can improve their buyback offers. Consumers have a natural tendency to extract economic value from rubbish, and this is where financial incentives to participate in the formal recycling system can help; they should be urged to hand in all of their electrical and electronic items. Many corporations, including Dell, Apple, and HP, have launched recycling programmes. When it comes to waste management, the 3R concept of reduce, reuse, and recycle can be quite useful.


In the Indian context, E-Parisaraa is a fantastic effort in e-waste management. Bangalore generates 8,000 tonnes of computer garbage each year, which is subsequently sold to scrap merchants. E-Parisaraa, an environmentally friendly recycling facility on the city’s outskirts, is India’s first e-waste recycling plant. Its goal is to reduce pollution and landfill waste by recycling valuable metals, plastic, and glass in an environmentally acceptable way.

History of Unix

Origins of Unix

UNIX development started in 1969 at Bell Laboratories in New Jersey. Bell Laboratories was involved (1964–1968) in the development of a multi-user, time-sharing operating system called Multics (Multiplexed Information and Computing Service). Multics was a failure, and in early 1969 Bell Labs withdrew from the project.

Bell Labs researchers who had worked on Multics (Ken Thompson, Dennis Ritchie, Douglas McIlroy, Joseph Ossanna, and others) still wanted to develop an operating system for their own and Bell Labs’ programming, job control, and resource usage needs. When Multics was withdrawn, Ken Thompson and Dennis Ritchie needed to rewrite an operating system in order to play Space Travel on a smaller machine (a DEC PDP-7 [Programmed Data Processor] with 4K of memory for user programs). The result was a system called UNICS (UNiplexed Information and Computing Service), an ‘emasculated Multics’.

Unix Development

The first version of Unix was written in the low-level PDP-7 assembler language. Later, a language called TMG was developed for the PDP-7 by R. M. McClure. Setting out to use TMG to develop a FORTRAN compiler, Ken Thompson instead ended up developing a compiler for a new high-level language he called B, based on the earlier BCPL language developed by Martin Richards. When the PDP-11 computer arrived at Bell Labs, Dennis Ritchie built on B to create a new language called C. Unix components were later rewritten in C, and finally the kernel itself in 1973.

Since it began to escape from AT&T’s Bell Laboratories in the early 1970s, the success of the UNIX operating system has led to many different versions: recipients of the (at that time free) UNIX system code all began developing their own versions in their own, different ways for use and sale. Universities, research institutes, government bodies and computer companies all began using the powerful UNIX system to develop many of the technologies which today are part of a UNIX system. By the late 1970s, a ripple effect had come into play.

Key Factors

1969 The Beginning

The history of UNIX starts back in 1969, when Ken Thompson, Dennis Ritchie and others began working on the “little-used PDP-7 in a corner” at Bell Labs, on what was to become UNIX.

1980 Xenix

Microsoft introduces Xenix. 32V and 4BSD introduced.

1983 System V

Computer Research Group (CRG), UNIX System Group (USG) and a third group merge to become UNIX System Development Lab.
AT&T announces UNIX System V, the first supported release. Installed base 45,000.

1991

UNIX System Laboratories (USL) becomes a company – majority owned by AT&T. Linus Torvalds commences Linux development.
Solaris 1.0 debuts.

1998 UNIX 98

The Open Group introduces the UNIX 98 family of brands, including Base, Workstation and Server. First UNIX 98 registered products shipped by Sun, IBM and NCR. The Open Source movement starts to take off with announcements from Netscape and IBM. UnixWare 7 and IRIX 6.5 ship.

2007

Apple Mac OS X certified to UNIX 03.

What is a Firewall and its Types

A firewall forms a barrier through which the traffic going in each direction must pass. A firewall security policy dictates which traffic is authorized to pass in each direction. A firewall may be designed to operate as a filter at the level of IP packets, or may operate at a higher protocol layer. Firewalls can be an effective means of protecting a local system or network of systems from network-based security threats while at the same time affording access to the outside world via wide area networks and the Internet.

TYPES OF FIREWALLS

  1. Packet Filtering Firewall

This is the simplest and fastest firewall component, and the foundation of any firewall system. It examines each IP packet in isolation (with no context) and permits or denies it according to a set of rules, and can hence restrict access to services (ports). A packet filtering firewall applies a set of rules to each incoming and outgoing IP packet and then forwards or discards the packet. The firewall is typically configured to filter packets going in both directions (from and to the internal network).
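As a rough illustration of the rule matching described above, here is a minimal sketch in Python. The rule fields, addresses and the default-deny policy are illustrative assumptions, not any particular product's rule format:

```python
# Illustrative packet filter: each rule matches on source address,
# destination address, and destination port; the first matching rule wins.
RULES = [
    {"src": "any",         "dst": "10.0.0.5", "port": 80, "action": "permit"},
    {"src": "192.168.1.9", "dst": "any",      "port": 22, "action": "deny"},
]

def matches(field, value):
    # "any" acts as a wildcard; otherwise require an exact match
    # (real filters match on network prefixes, flags, and more).
    return field == "any" or field == value

def filter_packet(src, dst, port, rules=RULES, default="deny"):
    """Return 'permit' or 'deny' for a packet; first match wins."""
    for rule in rules:
        if matches(rule["src"], src) and matches(rule["dst"], dst) and rule["port"] == port:
            return rule["action"]
    return default  # default-deny is the safer fallback policy
```

Note the final `default` argument: a packet that matches no rule is discarded, which is the conservative choice for a perimeter device.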

2. Stateful Packet Filters

A traditional packet filter makes filtering decisions on an individual packet basis and does not take into consideration
any higher layer context. To understand what is meant by context and why a traditional packet filter is limited with regard to context, a little background is needed. Most standardized applications that run on top of TCP follow a client/server model. A stateful packet inspection firewall reviews the same packet information as a packet filtering firewall, but also records information about TCP connections.
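The connection-recording idea can be sketched as follows, assuming a simple state table keyed on source and destination addresses and ports (the class and method names are made up for illustration):

```python
# Illustrative stateful filter: outbound connections are recorded, and
# inbound traffic is permitted only if it belongs to a known connection.
class StatefulFirewall:
    def __init__(self):
        # Each entry: (inside_host, inside_port, outside_host, outside_port)
        self.connections = set()

    def outbound(self, src, sport, dst, dport):
        # Record state when an inside host initiates a connection.
        self.connections.add((src, sport, dst, dport))
        return "permit"

    def inbound(self, src, sport, dst, dport):
        # Permit only packets that are replies on an established connection.
        if (dst, dport, src, sport) in self.connections:
            return "permit"
        return "deny"

fw = StatefulFirewall()
fw.outbound("10.0.0.2", 40000, "93.184.216.34", 80)  # inside host opens a web connection
```

An unsolicited inbound packet, even one aimed at a high-numbered port, is rejected because no matching connection entry exists; a stateless filter cannot make that distinction.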

3. Application Level Gateway (or Proxy)

An application-level gateway, also called an application proxy, acts as a relay of application-level traffic. Application-level gateways tend to be more secure than packet filters. Rather than trying to deal with the numerous possible
combinations that are to be allowed and forbidden at the TCP and IP level, the application-level gateway need only scrutinize a few allowable applications. In addition, it is easy to log and audit all incoming traffic at the application level. A prime disadvantage of this type of gateway is the additional processing overhead on each connection.

4. Circuit Level Gateway

This can be a stand-alone system, or it can be a specialized function performed by an application-level gateway for certain applications. A circuit-level gateway does not permit an end-to-end TCP connection; rather, the gateway sets up two TCP connections:
  • one between itself and a TCP user on an inner host, and
  • one between itself and a TCP user on an outside host.

Virtual Reality: Its history and future

Virtual Reality has a brief yet rich history with many ups and downs. Even though the formal name was coined much later, there had been many attempts that resembled the Virtual Reality we know today. Let’s first define the goal of Virtual Reality: to trick the brain into believing that something is real with the help of virtual elements, which can be auditory (sound) or visual (sight). There are many parallel definitions of VR, but one of the essential factors differentiating Virtual Reality from other forms of media is that VR involves some sort of interactivity. Unlike movies or 3D movies, where a person can only view but not interact, VR gives the freedom to touch, interact with and control what a person sees on their screen.

History of VR

The fascination with VR goes back to the 1930s, when science fiction writer Stanley G. Weinbaum wrote a story about ‘Pygmalion’s Spectacles’, whose wearer would be able to experience a virtual world. VR was further popularized by the popular Sci-fi series Star Trek: The Next Generation and its Holodeck.

One of the first examples of a VR HMD (Head Mounted Display) was the ‘Sword of Damocles’, developed by Ivan Sutherland and his student Bob Sproull. The HMD was connected to a computer. The contraption was intimidating, cumbersome, and heavy, and the graphics shown in the HMD were quite simplistic and trivial, but it was a convincing step towards the VR we know today. The term ‘VR’ was popularized in the 1980s by Jaron Lanier. By the end of the 1980s, NASA, with the assistance of Crystal River Engineering, created Project VIEW, a VR simulator developed to train astronauts. The 1990s saw the use of VR in multimedia and mainstream commercial spaces: numerous virtual reality arcades were introduced in public spaces where players could play games with immersive stereoscopic 3D visuals. The mid-1990s saw the VR foray by console manufacturers; Nintendo and SEGA both showcased VR gaming headsets, but both were commercial failures due to technical limitations and a lack of software support.

In 2012 the Oculus Kickstarter raised 2.5 million dollars, giving the startup a monetary jumpstart that previous VR projects had not been able to attain. In 2014 Facebook bought Oculus, which ensured that the startup’s VR development would be adequately funded. 2014 also saw numerous other VR developments such as Google Cardboard, Sony PSVR and Samsung Gear VR. In 2016 HTC released its advanced VR headset, the HTC Vive. Now the focus is on making VR truly standalone, free from the assistance of a dedicated computer or smartphone.

The Future

The future of VR looks bright. There are many factors for this, but one major factor is that the price of VR has gone down significantly. There are continual developments in the sphere of VR, and various new technological innovations are attempting to make the adoption of VR more seamless, comfortable and intuitive. The use of VR is not limited to gaming: VR is now also used for many commercial and business purposes. Recently Microsoft signed an agreement with the US government to supply 120,000 semi-custom versions of its HoloLens VR/AR headsets. VR is increasingly used in the health and manufacturing sectors as well. With a projected compound annual growth rate of 21.6% from 2020 to 2027, it seems that VR is only going to get more mainstream.

References:

https://nix-united.com/blog/the-past-present-future-of-virtual-reality/

https://www.vrs.org.uk/virtual-reality/history.html

News and Current affairs

What is news

News is information about current events. It may be provided through many different media: word of mouth, printing, postal systems, broadcasting, electronic communication, or the testimony of observers and witnesses to events. The genre of news as we know it today is closely associated with the newspaper.

What is current affairs

Technically, current affairs is defined as a genre of broadcast journalism in which the emphasis is on detailed analysis and discussion of news stories that have recently occurred or are ongoing at the time of broadcast.

Difference

Current affairs is a genre of broadcast journalism. It differs from regular news broadcasts, which emphasize presenting news reports as quickly as possible, often with a minimum of analysis.

In day-to-day life many things happen, and that information reaches us in the form of news and current affairs.

Wireless Sensor Networks (WSN)

Sensors, a controller, and a communication system make up a typical sensor network. Wireless Sensor Networks, or simply WSNs, are sensor networks in which the communication mechanism is implemented using a wireless protocol.

Sensor Nodes are placed in high density and frequently in huge quantities to provide sensing, data processing, embedded computing, and communication in a Wireless Sensor Network.

Elements of WSN

A typical wireless sensor network is made up of two parts. They are as follows:

  • Sensor Node
  • Network Architecture

Sensor Node

In a WSN, a Sensor Node has four fundamental components. They are as follows:

  • Power Supply
  • Sensor
  • Processing Unit
  • Communication System

The sensor takes analog data from the physical environment, which is then converted to digital data by an ADC. The main processing unit, which is generally a microprocessor or a microcontroller, processes and manipulates data intelligently.

A communication system consists of a radio for transmitting and receiving data, generally a short-range radio, since all of the components must be low-power electronics.

Sensor Nodes include not just the sensing component but also key capabilities such as processing, communication, and storage.
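The sensing-to-ADC step described above can be sketched numerically. The 10-bit resolution and 3.3 V reference voltage below are illustrative assumptions, not values from any particular sensor node:

```python
def adc_convert(voltage, v_ref=3.3, bits=10):
    """Quantize an analog voltage into digital counts, as a node's ADC does."""
    levels = 2 ** bits                       # 10 bits -> 1024 discrete levels
    voltage = min(max(voltage, 0.0), v_ref)  # clamp to the ADC's input range
    # Scale the voltage to a count; the top of the range maps to levels - 1.
    return min(int(voltage / v_ref * levels), levels - 1)
```

The digital count is what the processing unit actually manipulates; the microcontroller can then filter, threshold, or aggregate these counts before handing them to the radio.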

Network Architecture

Networking these sensor nodes becomes a requirement when a large number of them are deployed over a broad region to cooperatively monitor a physical environment. A sensor node in a WSN uses wireless communication to connect not only with other sensor nodes but also with a Base Station.

The base station delivers orders to the sensor nodes, and the sensor nodes collaborate to complete the task. The sensor nodes relay the data back to the base station after gathering the required information.

A base station can also connect to other networks through the internet. A base station receives data from sensor nodes and conducts basic data processing before sending the updated information to the user through the internet.

A single-hop network design is one in which each sensor node is linked to the base station.

In Multi-hop network architecture, the data is sent through one or more intermediary nodes.
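The single-hop versus multi-hop distinction can be sketched with a breadth-first search over a small set of radio links. The node names and topology below are hypothetical, chosen only to show a node that can reach the base station solely through intermediaries:

```python
from collections import deque

def hops_to_base(links, start, base="base"):
    """Breadth-first search: fewest wireless hops from a sensor node to the base station."""
    visited = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == base:
            return visited[node]
        for neighbor in links.get(node, []):
            if neighbor not in visited:
                visited[neighbor] = visited[node] + 1
                queue.append(neighbor)
    return None  # base station unreachable from this node

# Hypothetical deployment: node A is out of the base station's radio range
# and must relay its data through B or C.
LINKS = {
    "A": ["B", "C"],
    "B": ["A", "base"],
    "C": ["A", "base"],
    "base": ["B", "C"],
}
```

Here B and C are single-hop nodes (one hop to the base), while A's data travels two hops; real WSN routing protocols also weigh energy and link quality, not just hop count.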

Network Topologies in WSN

A few alternative network topologies utilized in WSNs are listed below.

Star Topology

Every node in the network is connected to a single central node, known as a hub or switch, in a star architecture.

Tree Topology

A tree topology is a hierarchical network in which the top node is a single root node, which is connected to numerous nodes at the next level, and so on.

Mesh Topology

Apart from delivering its own data, each node in a mesh architecture also functions as a relay receiving data from other linked nodes. Fully Connected Mesh and Partially Connected Mesh are the two types of mesh topologies.

Each node in a fully connected mesh topology is connected to all other nodes, whereas a node in a partially connected mesh topology is connected to one or more surrounding nodes.
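The difference can be made concrete by counting links: a fully connected mesh of n nodes needs n(n-1)/2 links, while a partially connected mesh has only the neighbor relationships actually configured. A small sketch, with illustrative node names:

```python
from itertools import combinations

def full_mesh_links(nodes):
    """Every pair of nodes gets a direct link: n*(n-1)/2 links in total."""
    return [frozenset(pair) for pair in combinations(nodes, 2)]

def count_links(adjacency):
    """Count undirected links in a partial mesh given a neighbor table."""
    # frozenset makes (a, b) and (b, a) the same link.
    links = {frozenset((a, b)) for a, neighbors in adjacency.items() for b in neighbors}
    return len(links)

nodes = ["n1", "n2", "n3", "n4"]
# Partial mesh: each node only links to its immediate neighbors in a chain.
partial = {"n1": ["n2"], "n2": ["n1", "n3"], "n3": ["n2", "n4"], "n4": ["n3"]}
```

For four nodes the full mesh needs six links versus three in this partial mesh; the gap widens quadratically, which is why large WSN deployments rarely use a fully connected mesh.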


Applications of Wireless Sensor Networks

Wireless Sensor Networks have an almost limitless number of uses. Heating, ventilation, and air conditioning (HVAC), air traffic control (ATC), automotive sensors, earthquake detection, disaster management, tsunami alert systems, industrial automation, personal health care, weather sensing, and monitoring are just a few of the applications of wireless sensor networks.