The Longest-Running Light Bulb Since 1901: The Case of Planned Obsolescence

The Centennial Light is the longest-running electric light bulb on record. It has been burning since 1901, almost without interruption, at Fire Station 6 in Livermore, California. The dim bulb looks like any other, and a camera live-streams it over the internet.

The official website and live webcam of the light bulb:

http://www.centennialbulb.org/photos.htm

The bulb was manufactured in the late 1890s by the Shelby Electric Company of Ohio, using a design by the French-American inventor Adolphe Chaillet. It has operated for over 100 years with very few interruptions. In 2011 it passed a milestone: one million hours of near-continuous operation. In 2015 it was recognized by Guinness World Records as the world’s longest-burning bulb.

The 60-watt bulb uses a carbon filament. One reason for its longevity seems to be its incredibly durable vacuum seal. Some research has been done on other bulbs manufactured by the Shelby Electric Company in that era, but no one knows exactly how these long-lived bulbs were made; the company was experimenting with a variety of designs at the time.

The business model of electricity was quite different when the first homes in the U.S. were electrified. Servicing was the responsibility of the electric companies: customers purchased entire electrical systems from a regional electricity supplier, the companies took care of installation and servicing, and any burned-out bulbs were replaced for free.

It made more sense for suppliers to manufacture bulbs that lasted longer and burned out as rarely as possible. But this business model was later replaced, and homeowners became responsible for replacing their own light bulbs. Manufacturers soon realized that it would be more profitable to make cheaper bulbs that burned out faster. Since the mid-1900s, goods have been manufactured with a pre-determined expiry date aimed at forcing consumers into repeat purchases, a phenomenon that has only been exacerbated in recent years. This practice is called planned obsolescence.

In 1924, the lifespan of light bulbs was at least 2,500 hours. The Phoebus cartel was formed in 1925 in Geneva, comprising the major incandescent light bulb manufacturers of the time: Osram, General Electric, Associated Electrical Industries, and Philips. The cartel directed its engineers to cut the life of the bulbs to 1,000 hours, which the engineers did by adjusting voltage and current. The cartel was intended to operate for 30 years, but it began falling apart in the early 1930s after General Electric’s patents expired and as it faced competition from non-member manufacturers in other regions. The cartel ceased operations after the outbreak of World War II in 1939.

Planned obsolescence is a critical issue: it not only decreases the lifespan of goods but, as a consequence, is also wasteful. It is unsustainable for the environment, and its main purpose is to maximize profits. It also reminds us that technological innovations are often withheld in favor of corporate greed.


Dystopian Genre: Analysis and its Significance

The dystopian genre covers a wide group of literary works within speculative fiction. It usually involves a vision of the future, or an alternate world, which an author uses to comment on and explore ideas about their own society. It has been a popular genre for quite some time. Let’s analyze why dystopian literature, in particular, is so important.

Dystopian literature makes important commentary on the world, our societies, and our governments. Humans, since the beginnings of organized society, have been fascinated by the idea of a perfect society: a ‘Utopia’. Humans are flawed and our societies mirror that, but it is also in our nature to strive for better, much like the philosophers who focused on political theory.

During the medieval age, utopia seemed to be a noble idea among European authors and philosophers. While utopian literature aimed to depict an ideal society, dystopia was a response in which authors argued against it. Dystopias are utopias placed in the real world: these visionary ideas work well as ideas, but when placed in reality they quickly turn dystopian.

In order to have an ideal society, humans would have to be devoid of human flaws. Writers depict societies that strive for perfection but ultimately fail because they ignore some vital part of humanity, and that is what makes a convincing dystopian world.

Writers look for flaws that exist in our societies today, grounded in truth, and amplify them. Such a reflection can be seen in Orwell’s 1984, which paralleled the problems of its period by depicting the overt dictatorial elements present in the Soviet Union and the Third Reich. Orwell critically pointed out government surveillance, the Thought Police, the constant rewriting of history, and the banning of books. These may seem exaggerated, yet they parallel our society, albeit in a more subdued manner. The parallels are nevertheless present, which is why 1984 is still a very relevant piece of literature today.

Similarly, in Brave New World, Huxley pointed out that there would be no need to ban books: people would be bombarded with so much information that they would be incapable of deciphering reality amid the overload, their critical thinking hijacked by pleasure.

From these two instances, we can see that one man’s heaven (utopia) is another man’s hell (dystopia). And dystopian writers don’t shy away from being political or radical when they describe these phenomena to warn readers.

Dystopian literature also shares elements with related genres like science fiction and cyberpunk. In recent times, it has been further popularized through movies, TV shows, and video games.

With the advent of the 20th century, dystopian literature evolved and flourished; many of the revered classics of the past century belong to this genre. As technology and science progressed, new forms of government and bureaucratic institutions were established, giving writers new material with which to examine societal trends.

There is a critical need to evaluate these literary works academically, as they are becoming ever more relevant to our present society.


The Story of the Best-Selling Video Game of All Time: Tetris

Tetris has its origin in the Dorodnitsyn Computing Centre, one of the foremost research institutes of the Soviet (now Russian) Academy of Sciences, located in Moscow. Created by software researcher Alexey Pajitnov in 1984, Tetris is a simple tile-matching game that took the world by storm upon its release.

It was developed for the Elektronika 60, a computer made in the Soviet Union. This was the final stage of the Cold War era, and computers were becoming more and more popular.

The game wasn’t intended as a commercial product. Instead, it was distributed freely among academic institutions around the Soviet Union and the bloc of countries economically aligned with the USSR in Eurasia, Africa, and the Americas.

As the USSR was a communist state, Pajitnov did not technically own the program; the game was the property of the state. With the help of a colleague, Dmitry Pavlovsky, and a teenage computer programmer, Vadim Gerasimov, Pajitnov continued to work on the game, even though commercializing it would have been a risky move under the Soviet government. Gerasimov went on to port the game from the old and bulky Elektronika 60 to the more widely used IBM-compatible PCs.

As the Elektronika 60 had no graphics output, the individual blocks in the game were drawn with text characters; the PC port added support for color graphics, which brought the game to life.

Pajitnov and Gerasimov started distributing Tetris for the PC in 1985 among friends and colleagues at various math and computer conventions. The sharing soon spread, and the game was smuggled out of the USSR to Hungary. During the mid-80s, the U.S. and Japan had a more prevalent console market, whereas in Europe gaming was primarily done on computers. The software market in Russia was practically non-existent, and most software was simply copied onto floppy disks.

Welcome screen of the 1987 version of Tetris

In 1986 Robert Stein, a salesman from the UK-based software company Andromeda, spotted Tetris at Hungary’s Institute of Computer Science. Convinced of the software’s potential, he struck an agreement with Pajitnov to sell the game internationally, even though legally Tetris was still owned by the Soviet government. There was one problem: the agreement covered only the PC and no other platform, yet Stein had struck a deal with Sega to launch the game on their platform. Later, Henk Rogers, another salesman, from the Netherlands, wanted to find a good launch game for Nintendo’s new Game Boy handheld. The Soviet government was not happy with the Stein deal, but Rogers won it over and also formed a good relationship with Pajitnov. Andromeda’s license to Tetris was eventually deemed illegal, and Nintendo was given the right to launch the game on its console. The Game Boy became the platform to showcase one of the first video games exported from Russia.

The game was a commercial hit: it has been ported to more platforms than any other game to date and holds the record as the best-selling game of all time. In 1996, Pajitnov was able to reclaim the rights and formed The Tetris Company along with Henk Rogers. Even though he missed out on potential royalties worth hundreds of millions, he was able to secure future royalties.


Audacity and the controversy over its new privacy policy

Audacity is a free and open-source digital audio editing and recording application available for Linux, Windows, macOS, and other UNIX-like operating systems. The project was started by Dominic Mazzoni and Roger Dannenberg in the fall of 1999 at Carnegie Mellon University, Pennsylvania, and the software was officially released on May 28, 2000. It is one of the most popular free and open-source programs, with over 100 million downloads.

In July 2021, the software was acquired by the Muse Group, and the acquisition brought several changes to the software’s privacy policy. Audacity is very popular in the audio editing space, used by everyone from beginner podcasters and musicians to professionals. The recent changes to the privacy policy under the new ownership have led to accusations that the program is now spyware. The new policy states that user data is collected for “app analytics” and “improving our app”, which is not unusual. But further into the policy statement, it is mentioned that the collected data will also be used for “legal enforcement”.

The policy is a little unclear. It states that Audacity may share personal data with:

“any competent law enforcement body, regulatory or government agency, court, or other third parties where we believe disclosure is necessary.”

https://www.audacityteam.org/about/desktop-privacy-notice/

The language used is quite vague, but roughly it can be interpreted to mean that Audacity will share data if requested by law enforcement or a court order. The company can also transfer more data if there is a potential buyer or merger in the future.

Another concerning change is the ban on users under 13 years old, which was not the case earlier. This arguably violates the license under which the software is currently distributed.

This has been a concern for many users of the program, but it also raises a bigger question about data collection, and it hints at the intention behind the purchase. The software already has a user base of millions, so the potential for data collection is rather high. The policy of further distributing the data to third parties is the decision being met with the most criticism. Another thing to understand is that Audacity is a small, lightweight piece of standalone open-source software, but with this new policy it might no longer remain offline software. These are still speculations, however.

A similar instance occurred some years back when Oracle Corporation acquired the very popular office suite OpenOffice. As users and contributors were unhappy with the changes under the new ownership, a fork of OpenOffice was created. Contributions to this new alternative, LibreOffice, grew in a very short period of time, and it emerged as a viable successor, soon replacing OpenOffice in most Linux distributions. There is already a new fork of Audacity, and it is being actively worked on.

We can also interpret this as a case of poorly drafted writing: there is a possibility that the language used in the new policy was simply misunderstood and things got overblown.


How Archives Transformed in the Digital Era

The word ‘archive’ is derived from the Greek ‘arkheion’, which in turn refers to ‘archon’, a magistrate who oversaw the town hall where all official public documents were stored. The word ‘archive’ first came into use in the 17th century.

Archives are also known as ‘memory institutions’ because they record and preserve memories, forming a significant part of the culture, community, and official and unofficial history of any place, region, state, or institution. Their function is to collect, store, and preserve artefacts and documents of historical, cultural, and legal importance from the past and the present so that they remain accessible, informative, and useful to future generations. In general, any organization, government institution, or individual can build archives. The National Archives, UK, has described archives as “collections of records or documents, selected for lasting preservation due to their historical value, significance as evidence, or as a source for research studies”. The International Council on Archives (ICA) has defined archives as the “documentary result of various human activities conserved for its long-term value”, further describing them as contemporary accounts created to provide a true and verified version of past events.

The significance of archives lies in the orderly collection of crucial source documents accumulated over an individual’s or organization’s lifespan and preserved so that they can serve as evidence or reference for future work. As archives are repositories housing historical documents and records of value, archival research helps scholars and researchers looking for data to assess and facts to study from original documents. However, owing to the vastness and diversity of archival documents and records, archival analysis is a hectic and tedious job. Access to the artefacts and documents stored in an archive is not easy and requires permission from the relevant authority. In addition, most of the information stored in traditional archives is paper-based and thus susceptible to decay over time. These limitations of traditional archives can be overcome by archiving documents and artefacts in various digital formats, which can ensure that the information is preserved for a substantially longer period.

With the advent of newer digital technologies, it became easier and more convenient to store and preserve information in the digital space. With the assistance of new digital tools and methods, the process of moving information from the physical world to the digital world became much more efficient.

Digital archiving is an area where the relationship between digital tools and methods and information preservation can be witnessed; it is a blend of the older and newer ways of storing information. Digital archives function like traditional archives, as repositories of organized collections of information, but in various digital formats at a virtual location. This also makes them more accessible and democratic, as physical constraints are eliminated.

Advantages of Digital Archives:

  • Digital archives allow “anywhere-anytime” access, saving users time and money.
  • The redundancy of information stored in digital archives can be reduced, which promotes ease of access.
  • No geographical site is required to build a digital archive, which is cost-effective.
  • Simultaneous access requests from multiple users can be served by keeping multiple copies of the stored information, overcoming the bottlenecks encountered in traditional archives.
  • Managing and navigating digitally stored objects or records is easier, and digital archives can scale to preserve terabytes of information.
  • Digital archives are less subject to bureaucracy than traditional archives, which helps ensure accessibility for the general public.

Digital archives are not perfect. Due to the digital divide and other constraints, researchers are sometimes unable to access the information. And when information challenges authority, it can be made unavailable in digital archives through censorship. But there is no denying that digital archives have transformed the way information is stored and processed.


Baltic Countries and their economic transformation

The Baltics, also known as the Baltic states, comprise three countries: Latvia, Lithuania, and Estonia, situated on the eastern shores of the Baltic Sea. In 1991 the governments of Lithuania, Latvia, and Estonia declared independence from the Union of Soviet Socialist Republics (USSR). The three countries have a collective population of just over 6 million, and they are among the better examples of progress after the breakup of the USSR, while many other former Soviet republics have suffered the disarray of corruption and political instability.

In 2002 the Baltic countries applied for membership in the European Union (EU), and by May 2004 all three had joined. They also gained membership in NATO in March 2004.

Downtown Tallinn

Baltic independence in 1991

It is truly astounding how the three countries have developed since 1991. None of them had been independent since 1940. The three countries had large Russian minorities, and many Soviet soldiers were still stationed there. There were no major national institutions, no banking infrastructure, and the economy was crumbling. A homegrown national movement against the ruling government had been growing since the 1980s; these popular fronts won the republican parliamentary elections against the ruling party in early 1990 and were allowed to govern, though with limited power. The Russian president at the time, Boris Yeltsin, did not contest their newly declared independence in 1991, and the Baltics witnessed no violence when the three governments declared independence.

The three nations also had almost no natural resources, unlike the resource-rich USSR, and they were in a very vulnerable situation with small populations and no militaries of their own. Even though the countries were linguistically distinct, people in all three had a united drive to strive for a better future. The three implemented reforms with a shared vision, and their governments shared many policies, ideas, and experiences. The Baltic states also valued their new independence with a lot of enthusiasm and didn’t take it for granted. While other ex-USSR countries often had to ask the Russian Federation for assistance and formed new alliances with the Russian government, the Baltic countries stayed away from joining the post-Soviet Commonwealth of Independent States.

In the subsequent years, all three countries adopted radical economic policies, with Estonia as the first mover and Latvia and Lithuania following suit. In 1994 Estonia introduced a flat income tax at just 24 percent, and the other two implemented similar policies. Currently, Lithuania has a tax rate of just 15 percent, one of the lowest. With early and fast deregulation and privatization, the Baltic countries were able to capture a large amount of foreign direct investment. Estonia also radically transformed its public sector through digitalization and less reliance on paperwork; Latvia’s and Lithuania’s transformations in this area were not as drastic, but after some time both followed in Estonia’s footsteps. Transparency International ranks Estonia No. 17, Lithuania 37, and Latvia 42 out of 175 countries on its Corruption Perceptions Index for 2020. This is a commendable showing considering all three are relatively new entrants to the EU, and many other EU countries rank lower.

Attributing the success

The success can also be attributed to the generous support the three countries received from the international community and to funds granted by the EU, the World Bank, and the IMF. In 2008 the Baltics suffered from the global economic crisis; the three soon adopted the euro as their currency to avoid the kind of liquidity freeze they experienced at that time. The Baltic economies rebounded quickly, and thanks to sound monetary measures, the three have very low public debt. Baltic governments have also made swift progress in the education sector, attaining commendable rankings in the Programme for International Student Assessment (PISA) of the Organization for Economic Cooperation and Development (OECD); Estonia has done particularly well here, with top-10 rankings in many assessments. But the Baltics also face challenges, with population loss due to low birth rates and emigration. Proximity and hostility with Russia remain challenges these small nations have to endure.

Short- and Long-Form Content: Their Relevance

The terms ‘short-form’ and ‘long-form’ content get thrown around a lot on the internet, but if we try to clearly define and differentiate the two types of content, it becomes much harder to discern between them.

For instance, if we try to specify how many words define long-form content, we notice that there is no particular word count that fulfills the criteria. For some content creators 1,000 words might be long, and for some it may be short; the same principle applies to viewers. Because of this, many experts consider long-form and short-form more as mindsets than as specific word counts. The threshold can also vary according to the type of content and subject.

Now let us look at social media platforms to understand this phenomenon better. In a nutshell, the rise of short-form content and marketing sounds like a great deal, but we have to look at the underlying challenges and opportunities that come with it. TikTok, Instagram, and many other platforms have become quite popular, yet somehow YouTube still manages to stay relevant. Why is that?

One of the biggest strengths of YouTube, and of its encouragement of long-form content, is the higher level of engagement. Long-form content is easier to monetize: instead of one ad in a one-minute video, a creator can place multiple ads in a 20-minute video. Advertisers prefer that, and so does the platform, which also incentivizes creators to make long-form content. With the advent of the pandemic, many platforms that focused on short-form content have suffered. An example is Quibi, which started as a Netflix alternative for on-the-go viewing; with many people not traveling and confined to their homes due to restrictions and lockdowns, Quibi’s relevance became very shaky. When a person is at home, he or she prefers to watch a full episode instead of a brief version of that content. We cannot pinpoint the exact cause of Quibi’s failure, but this was certainly one of the major contributors.

When does short content work?

This is not to say that short-form content does not work: the popularity of platforms that cater to short-form content is thriving, especially when it comes to images and short videos. This means it is crucial to determine the target audience. Although TikTok and Instagram might not have the original content and monetization possibilities of YouTube, they are still the go-to platforms for many new creators and numerous advertisers.

But things are a little different when the content is in written form. It is essential to understand when short-form content works and when it does not. For instance, BuzzFeed focuses on large volumes of shorter content. Keep in mind, though, that multimedia is an integral part of BuzzFeed’s content, and sometimes the text only acts as a supplementary shell for the multimodal content.

Due to busy work lives, people tend to have less time to spare, and short-form content seems like the right choice. It also works well on mobile platforms: applications like Inshorts and Dailyhunt have perfected short-form content delivery. There are a few more prerequisites when working with short-form content. For example, the content needs to be on point and convey the whole story succinctly, and multimedia inputs become even more important for keeping up engagement. With the digital transformation of news, the news cycle has become compressed: readers demand news instantly, and news is now consumed in binges.

Things to keep in mind: is it good or bad?

Short-form content also requires short-form marketing. This is a challenging area: advertisers are trying to keep users engaged, but they also have to make sure not to annoy them with in-your-face, bombastic ads. This is one area that consumers have still not received well.

There is no definitive answer to the question of whether the rise of short-form content is good or bad; it all depends on the context of the problem. While designing and creating content, the writer has to determine the target audience, which essentially solves half the problem. There is demand for short-form content, but this is more relevant for video and photo sharing. Written short-form content needs to be accompanied by multimedia to make it enticing. Headings need to be attractive, but overly clickbaity headings should be avoided: the epidemic of fake news on instant messaging platforms like WhatsApp has left a sour taste in people’s mouths.

Many predicted that desktops would be wiped out when smartphones arrived, but desktops are still relevant. Even though a mobile-first strategy is preferred, almost every major website still offers a desktop equivalent to its mobile counterpart. The reason is simple: a lower bounce rate and significantly higher average time on site.

The same applies to long-form content, which offers more possibilities than short-form. There can be additional content and features like good-or-bad or pros-and-cons sections, just like I’m doing here! Various examples, references, graphs, and so on can complement the content. It also becomes easier to organize content with numerous subheadings, making it much more intelligible. This is not to say that longer content is necessarily better; sometimes long-form content hits a point of diminishing returns. In many cases, the topic and subject matter more than the form of the content. Content relevance, internal linking structure, user experience design, pictures, videos, and the quality of the website or application are very important as well.

In conclusion, both short-form and long-form content are relevant; the only catch is determining which type of content is applicable in a particular situation. Short-form content platforms have seen a huge surge in popularity, but marketing and monetizing short-form content come with their own set of problems that need to be addressed.

Semantic Web: The Next Step of the World Wide Web

In 1989 Tim Berners-Lee invented the World Wide Web as we know it today, and the fundamental building block of this framework is the hyperlink. With hyperlinks, different documents are connected, and any document on the web can be identified by its link. This is also known as Web 1.0, the ‘Web of Documents’, and its main goal was to exchange information between different machines on an interconnected network.

The Semantic Web is a collaborative effort led by the World Wide Web Consortium (W3C). The Semantic Web, also called Web 3.0 or the ‘Web of Data’, links to specific pieces of information contained within a document or application. The Semantic Web is also modular and dynamic: if the information is ever updated, users automatically take advantage of the updates. Scalability is an essential requirement of the Semantic Web as well. In the Semantic Web, we go beyond documents and down to the underlying data level.

Some of the main underpinnings of the Semantic Web are as follows:

  • Building models: the quest for describing the world in abstract terms to allow for an easier understanding of a complex reality.
  • Computing with knowledge: the endeavor of constructing reasoning machines that can draw meaningful conclusions from encoded knowledge.
  • Exchanging information: the transmission of complex information resources among computers that allows us to distribute, interlink, and reconcile knowledge on a global scale.

Linked Data is used to connect the web of data in the Semantic Web. Links are made so that a user or a computer can explore the web of data. Linked Data is more interactive, visible, powerful, and useful for retrieving and finding data and for determining its relation to other data on the web. So instead of having URLs (links) between documents, in the Semantic Web we have links between facts, presenting knowledge about the data in a much more organized manner. It also enables seamless data integration and can bring intelligence to the system. The identifiers used are URIs, which come in two forms: URLs (Uniform Resource Locators) and URNs (Uniform Resource Names).

The Basic Structure of the Semantic Web

To implement these URIs we need the Resource Description Framework (RDF). RDF is a standard model for data interchange on the Web: a framework, or data model, for describing resources, and the formal language for describing information in the Semantic Web. The goal of RDF is to enable applications to exchange data on the Web while still preserving its original meaning. RDF data comprises triples: a triple gives each piece of data a unique identifier so that we can link it and form relations with other data nodes. Multiple connected triples form a graph.

In metadata terms, RDF is expressed in triples. A triple comprises three fundamental entities (see the sketch after this list):

  • Subject is the resource being described by the metadata record
  • Predicate is an element (a property) of that record
  • Object is the value assigned to that element
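
To make the triple structure concrete, here is a minimal sketch using Python’s rdflib library (a widely used RDF toolkit; the http://example.org/ identifier is a hypothetical placeholder, and FOAF is the standard ‘Friend of a Friend’ vocabulary). Each g.add() call stores one subject-predicate-object triple, and the whole graph can be printed in Turtle notation:

    # A minimal RDF sketch with rdflib: two triples describing one resource.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    alice = URIRef("http://example.org/alice")   # subject: the resource being described

    g.add((alice, RDF.type, FOAF.Person))        # predicate rdf:type, object foaf:Person
    g.add((alice, FOAF.name, Literal("Alice")))  # predicate foaf:name, object: a literal value

    print(g.serialize(format="turtle"))          # the same triples in Turtle notation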

SPARQL (SPARQL Protocol and RDF Query Language) and OWL (Web Ontology Language) are two other technical standards used in the Semantic Web.
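
As a rough illustration of how SPARQL fits in, the following hedged sketch runs a query against the small rdflib graph built above, retrieving every resource that has a foaf:name:

    # Query the graph g from the previous sketch with SPARQL.
    results = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?person ?name
        WHERE { ?person foaf:name ?name . }
    """)
    for person, name in results:
        print(person, name)   # -> http://example.org/alice Alice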

The Semantic Web is an extension of the World Wide Web, and it has made significant strides towards making the internet more seamless, efficient, and scalable. Linked Data is critical in making this happen. But the Semantic Web is not yet widely adopted, and many corporations and organizations are unaware of it. The focus should therefore be on promoting wider adoption of the Semantic Web with better availability of learning resources.

The fragile ecology of the Himalayas

On 7 February 2021, Uttarakhand’s Chamoli district experienced a disaster in the form of an avalanche, when a small portion of the Nanda Devi glacier broke off. The sudden deluge caused considerable damage to NTPC’s Tapovan-Vishnugad hydel project and the Rishi Ganga hydel project, and at least 72 people were confirmed killed. But this is not a new phenomenon: every year there are reports of sudden deluges all across the Himalayan region.

The Himalayas have maintained the climate of the Indian subcontinent. They act as a barrier, diverting the monsoons to pour rain on the fertile northern plains rather than letting them drift further north; the mountain range likewise blocks cold northern winds from reaching the subcontinent. The Himalayas stretch all the way from Afghanistan to Myanmar, with 110 peaks over 24,000 feet. They are also very rich in biodiversity and are the source of numerous perennial rivers and water bodies. Rivers like the Indus, Ganges, and Brahmaputra that originate in the Himalayas are the lifelines of millions of people in the subcontinent.

But in recent years, the Himalayan region has seen a drastic transformation, with an increasing population and deforestation. The Himalayas are still a very young mountain range, which means the region is not as stable as older ranges; this is also the reason for the high number of earthquakes. Many exploitative projects and resource-extraction initiatives are ongoing throughout the region. The increasing influx of tourists into Ladakh, which puts pressure on an already sensitive region, and the limestone extraction near Mussoorie, which has turned the surrounding lush mountains barren and unstable, are just some of the instances. The cities located on the periphery of the Himalayas are also facing the same degradation problems as those on the plains. With ever-increasing population growth, cities are expanding, which means overflowing garbage and drains. Unplanned growth of new settlements and uncontrolled tourism have only exacerbated the issue.

Photo by rasik on Pexels.com

Steps to safeguard the region

There is a need for national-level safeguards to help preserve the fragile ecology of this region. First, sustainable urbanization in mountain habitats needs to be ensured through town planning and the adoption of architectural norms. Given the region’s sensitivity, the growth of new settlements has to be controlled, and existing settlements should be developed with all the basic urban facilities. Solid waste management is another area that needs focus: the use of plastic bags should be banned in all towns and villages in the Himalayan region. Some states like Himachal Pradesh and Sikkim have enforced this rule, but many other states have not fully implemented it.

Pilgrimage is an important part of the tourism sector in the Himalayan region. Sustainable pilgrimage needs to be promoted, and the inflow of pilgrims has to be set according to the ecological capacity of each site. Roads are an essential node for the connectivity and development of a region, but the construction of roads and highways needs to take into account the sensitivity and fragility of the region; environmental impact assessments should be compulsory before construction. Finally, environmental awareness needs to be propagated so that every individual can be empathetic and mindful of the dangers of environmental degradation. A coordinated effort between local cultures, local people, unions, and state governments will be essential to make this happen.

References:

http://www.ipcs.org/comm_select.php?articleNo=582

Permaculture

Permaculture is a fusion of the words ‘permanent’ and ‘culture’. The term was devised by Bill Mollison and David Holmgren in 1978. In Mollison’s words, permaculture can be defined as the “conscious design and maintenance of agriculturally productive ecosystems which have the diversity, stability, and resilience of natural ecosystems”. All this is achieved through the harmonious, sustainable integration of landscape and people. Farms are designed in a way that promotes the coexistence of competing plant species. Currently, more than 3 million people practice permaculture across 140 countries.

Permaculture’s claimed benefits

Practitioners of permaculture argue that as the population increases, there is growing pressure to produce more food. The modern method of monoculture is not a sustainable way of growing food: a large area of land is used for only one crop, and lots of chemical fertilizer is required to sustain production. This puts immense pressure on the topsoil; the soil loses its fertility, and ever more fertilizer is needed to maintain productivity and output. Permaculturists also discourage monoculture because it promotes farming with a commercially driven mindset, in which only selected, commercially viable varieties of crops and plants are grown. Wild and uncultivated foods like tubers and millets are sometimes sidelined from people’s diets even though they are just as nutritious as other foods, if not more so.

Practicing permaculture can help small farmers become more self-sufficient in producing their food without relying on external inputs. Farmers also get the opportunity to grow a large variety of fruits, grains, and vegetables on a single farm. But it’s more than just self-sufficiency: the farm itself generates manure, which saves on fertilizer costs. And there is even more: since perennial plants are a structural part of permaculture, the plants don’t require regular tending, which reduces labor expenses as well. It also allows the plants to endure harsh weather conditions like heavy monsoon downpours or winters.

Challenges and future

Modern conventional agricultural science has been a boon in terms of production quantity as a whole, but there are still many problems we are facing right now due to this form of farming. The focus should be on quality first and then quantity. What modern agricultural science has done is separate the farmer from the soil: the focus and research are on yields and the nutritive properties of plants. Yet food has to come from the soil, and most of the solutions are available in nature itself. Permaculture provides a pragmatic and efficient way for subsistence farmers to produce food. In India, where small farmers are the majority, they will face immense pressure from the dangers of climate change and increasing constraints on resources, especially water. Then there is the monetary issue as well. Permaculture helps here, as the food is closer to the producer and there is less wastage, making food production economical and sustainable in the long run. Replacing conventional agriculture with permaculture will not be easy or immediately practical, but with small steps it can emerge as a viable way of producing food while maintaining the ecology of the planet.

Virtual Reality: Its history and future

Virtual reality has a brief yet rich history, with many ups and downs. Even though the formal name was coined much later, there were many early attempts that resembled the virtual reality we know today. Let’s first define the goal of virtual reality: to trick the brain into believing that something is real with the help of virtual elements, which can be auditory (sound) or visual (sight). There are many parallel definitions of VR, but one of the essential factors differentiating virtual reality from other forms of media is interactivity. Unlike movies, even 3D movies, where a person can only view and not interact, VR provides the freedom to touch, interact with, and control what a person sees on screen.

History of VR

The fascination with VR goes back to the 1930s, when science fiction writer Stanley G. Weinbaum wrote a story about Pygmalion’s spectacles, whose wearer could experience a virtual world. VR was further popularized by the sci-fi TV series Star Trek: The Next Generation and its Holodeck.

One of the first examples of a VR HMD (head-mounted display) was the ‘Sword of Damocles’, developed by Ivan Sutherland and his student Bob Sproull. The HMD was connected to a computer, and the contraption was intimidating, cumbersome, and heavy. The graphics shown in the HMD were quite simplistic and trivial, but it was a convincing step towards the VR we know today. The term ‘VR’ was popularized in the 1980s by Jaron Lanier. By the end of the 1980s, NASA, with the assistance of Crystal River Engineering, created Project VIEW, a VR simulator developed to train astronauts. The 1990s saw the use of VR in multimedia and mainstream commercial spaces: numerous virtual reality arcades were introduced in public spaces, where players could play games with immersive stereoscopic 3D visuals. The mid-1990s saw VR forays by console manufacturers; Nintendo and SEGA both showcased VR gaming headsets, but both were commercial failures due to technical limitations and a lack of software support.

In 2012 the Oculus Kickstarter raised 2.5 million dollars, giving the startup a monetary jumpstart that previous VR projects had not been able to attain. In 2014 Facebook bought Oculus, ensuring that the VR startup would be adequately funded in its VR development. 2014 also saw numerous other VR developments, like Google Cardboard, Sony’s PSVR, and the Samsung Gear VR. In 2016 HTC released its advanced VR headset, the HTC Vive. The focus then shifted to making VR truly standalone, free from the assistance of a dedicated computer or a smartphone.

The Future

The future of VR looks bright. There are many factors behind this, but one of the major ones is that the price of VR has gone down significantly. There are continual developments in the sphere of VR, and various new technological innovations are attempting to make the adoption of VR more seamless, comfortable, and intuitive. The use of VR is not limited to gaming: VR is now also used for many commercial and business purposes. Recently Microsoft signed an agreement with the US government to supply 120,000 semi-custom versions of its HoloLens VR/AR headset, and VR is increasingly used in the health and manufacturing sectors as well. With a compound annual growth rate of 21.6% projected from 2020 to 2027, it seems that VR is only going to become more mainstream in the future.

References:

https://nix-united.com/blog/the-past-present-future-of-virtual-reality/

https://www.vrs.org.uk/virtual-reality/history.html

The issue of Electric Vehicles and their sustainability

Tesla launched the Model S in 2012; the luxury car was one of the more mainstream vehicles that accelerated the growth of electric vehicles, and some traditional car manufacturers followed suit to compete with Tesla. Fast-forward a decade, and electric cars have become even more relevant: every major internal combustion engine manufacturer now has an electric model in its portfolio.

The rise of electric cars has been commendable, with a 75% growth rate and current sales north of 3 million units. But we have to look at the sustainability of electric vehicles realistically. Internal combustion engine cars have come a long way in the past 20 years: conventional cars are significantly more fuel-efficient and release fewer harmful gases into the environment. Still, they cannot compare to an electric vehicle’s zero tailpipe emissions.

When we talk about electric vehicles, we also have to consider the whole infrastructure required to sustain them. The elephant in the room is the battery. Battery technology has progressed a lot in the past decade, but there are still many limitations hindering the adoption of EVs. One of the biggest issues is the limited lifespan of batteries: the average lifespan of a typical EV battery is approximately 10 years, depending on usage, and in many EVs the replacement of the battery is very difficult or almost impossible. Another problem is recycling. Batteries are not easy to recycle, and currently electric vehicles hold a very small share of the market. But as more and more people adopt EVs, more vehicles will have to be scrapped, and proper disposal of the batteries will be required. This can become an environmental concern if batteries accumulate with no proper arrangement for recycling.

Issues that will have to be addressed

The problem is much bigger than battery technology. Power delivery and infrastructure also need to be developed to support EVs. This is going to be easier in urbanized areas with small populations; for instance, Norway has been moderately successful in adopting EVs as a standard, with plans to cease sales of internal combustion engine vehicles entirely by 2025. The target will be much more difficult in large countries with big rural populations, where distances between cities are greater. It also requires a considerable amount of capital to make the transition possible. Currently, traditional gas vehicles are still more viable, practical, and cheaper than EVs, which tells us that EV manufacturers and governments will need much more than subsidies to convince people to convert. EV manufacturers will also need to control the amount of energy required to produce a single EV, which is much higher than for a gas vehicle.

The extraction of lithium is also a contested issue, and just as with fossil fuels, the elements required to make batteries are non-renewable. Lithium can be extracted only in limited capacity, and with more demand it will become even more challenging to supply the raw materials required to build batteries. Building new battery production factories will also require a considerable amount of time and money. Until battery production capacity is increased, supplying batteries will be a challenge, and mass adoption will not be as fast as we would like it to be.

In conclusion, EVs are certainly the future: they are cheaper to operate and have zero tailpipe emissions. But many other issues, like infrastructure, battery supply, and proper disposal, will have to be addressed.


Steam: The Platform That Changed PC Gaming Forever

Steam is a digital marketplace owned by Valve Corporation. Valve was formed by ex-Microsoft employees, and after launching critically acclaimed games like Half-Life and Counter-Strike, it set its eyes on retailing software through the internet.

History of Steam

Steam was launched as a standalone software client in September 2003, as a channel for providing automatic updates to Valve’s games. It pioneered the digital distribution of software. Before it, video games were physically owned, and updates and patches were very cumbersome to implement. Just as owning music CDs became obsolete, owning games on physical media became a rare thing. As internet speeds increased, downloading games over the internet became easier, and having a single library where users can access all their purchases has become the norm.

Steam later expanded to carry software from third-party developers, becoming a full-fledged distribution platform. As of today, Valve is one of the most profitable privately owned corporations in the world.

Steam’s business model is very similar to the one Apple uses in its App Store: it operates a commission-based model, taking a percentage cut (30%) from all sales made on its platform.

How did Steam grow from its initial years?

In its initial years, Steam employed a freemium model, offering some of its games free-to-play. This helped Steam reach a wider audience and increase its growth.

Steam has benefited a lot from its network effect. As more game publishers and smaller indie developers joined the platform, Steam’s library of titles grew. New developers got a medium to publish their games without the hassle and cost of hosting and maintaining them, and with free games and new titles being added all the time, new users joined the platform at an ever-increasing rate. Consumers could now access their whole game library in a single place, benefiting both sides of the core interaction.

Currently, Steam has 100 million monthly active users and over 30,000 games listed on its platform.

In its initial years, Steam solved its chicken-and-egg problem by providing free games and free demos. The strategy of seeding the platform with free updates and discounts that were not available on physical media helped Steam become a viable option. The same strategy is followed by Epic Games, the Fortnite developer and Steam rival, whose Epic Games Store launched in 2018 and gives away a free game every week. This has helped Epic gain a significant foothold in a market dominated by Steam, even though the Epic Games Store lacks many of the features Steam provides. Valve realized that its user base is a very valuable asset: to increase engagement on the platform, it added community forums where users could discuss and help each other on any topic. Steam also introduced many new features, including a statistics-tracking system and a friends list. Steam was now shaping up as a social media platform on top of its core interaction as a marketplace.

In 2008 Steam introduced a filtering system that helps users find their desired products; the catalog could now be browsed by genre. In 2012 Steam introduced Steam Guard, which added two-factor authentication to curb fraud, and also launched its mobile app. In 2016 Valve introduced support for VR headsets, collaborating with HTC to introduce the Vive headset and later extending support to the Oculus Rift.

Present and Future

Recently Steam launched a new VR title, Half-Life: Alyx, alongside the introduction of its new VR headset. It is one of the best virtual reality applications to date, and the game received critical acclaim from reviewers and users alike. Valve is again attempting a paradigm shift by pushing virtual reality to the mainstream, and so far it has been successful.

Steam was one of the first digital marketplaces, and Valve is continually working on improving the platform. Even with new competitors, Steam has remained a relevant force for over 15 years. New technological ventures and platforms can learn a lot from Valve’s drive to be innovative and ambitious in its approach.


Global Chip Shortage: An Analysis

Not many industries have suffered such disarray as the chip industry since the advent of the Covid pandemic. Things were not great for chipmakers in 2020 due to the pandemic, but instead of showing signs of improvement, 2021 has so far been even worse for the industry. Chip supply has fallen short of demand, and it is not just the electronics industry going through a rough phase but many other industries as well. A couple of decades ago, chips were mainly present in personal computers and specialized electronic appliances and gadgets; now chips power the world.

One of the worst-hit industries has been the automobile sector. When the first wave of Covid-19 hit the world, global car sales dwindled, and to compensate, car manufacturers lowered their chip orders. These chips are required for assembling the critical electronics and computers inside modern cars. 2021 saw a sudden increase in automobile sales, which disrupted the equilibrium of chip supply: many automobile manufacturers placed large orders, and chip fabrication plants like TSMC were unable to cope with the sudden growth in demand. This parallel demand for chips has increased the backlog, and even though chip manufacturers are operating overtime, they have not been able to keep up. Now even home appliances might face issues with their chip supply.

There is one more important aspect to address: the increased demand for electronics since the pandemic began. As many people were, and still are, stuck at home, they are buying computers, consoles, televisions, and various other electronic devices, and many companies have not been able to keep up with the demand. The graphics card is one of those elusive items that has suffered a double whammy: both gamers and crypto miners want their hands on the newest cards, but due to the chip shortage, card manufacturers cannot keep up. There have been many cases of individuals and groups scalping (buying in bulk) these new cards and reselling them at much higher prices.

Basic appliances and car components often use chips manufactured with older technology. For instance, PCs and smartphones use 7 nm manufacturing, whereas car manufacturers use older 32 nm or 14 nm technology because those chips are comparatively cheaper to manufacture. But due to the shortage in supply, manufacturers are prioritizing their newer chips, and it is getting challenging to allocate resources for the older manufacturing processes. Because of this, many car manufacturers have scaled down the extra amenities in their models.

TSMC (Taiwan Semiconductor Manufacturing Company) is one of the biggest chip manufacturers in the world, producing 60 percent of the world’s chips for automobiles and 92 percent of cutting-edge chips. Recently Taiwan has been experiencing its worst drought in over 50 years, and since a high quantity of water is required to clean the wafers during chip manufacturing, the drought has only added to the manufacturing problems. There is immense pressure when most of the world’s chips are made in one place, and this exposes the problem of relying on a single source of manufacturing. Due to globalization and competition, most of the world’s manufacturing shifted to Asia. The chip shortage will most probably persist into next year as well. Intel (U.S.) has started to set up two new manufacturing plants in Arizona. This comes at a time when many have realized that a concentrated source of manufacturing is not the most reliable arrangement, and diversification is the only way to deter future shortages.

Video Games and Education: How to Bridge the Two

Video games have been a significant part of our culture for over half a century. They have led to many artistic and technical endeavors, including numerous innovations over the years. People have started to recognize these games as culturally important, along with the need to curate and preserve them properly.

Education is one of the areas that has gone through transformational change from 2020 onward. The learning space has transitioned from a physical space to a digital one. The convergence of various technologies and modalities has given birth to a new space in the education system.

When it comes to computer games, many used to scoff at them as mere products for brief entertainment, but the increased proliferation of digital technology in every individual’s life means that video games have a more significant role to play than ever before. Increased visual fidelity and better computing power have made the digital world more immersive. Online education has been given a push by governments around the globe, and the majority of higher institutions are teaching remotely with the help of different online tools. One of the major challenges many educators face is achieving engagement on par with the physical class.

Instead of just looking at online education as an alternative to physical classes, we have to look at it as a means of learning that can enhance the experience and engagement of students beyond what physical classes offer. The technologies of 2021 clearly indicate that various tools and measures can be added to the experience, not just in online learning but also in the spaces of cultural heritage and digital tourism. Video games are an important tool that educators can leverage to fulfill these requirements.

Engagement and immersion can be the key factors that drive the education system forward. There are many instances where students skip a certain subject even before attempting to learn it, whether due to a poor and unfavorable experience with a certain instructor or wrong assumptions about the difficulty of the subject. Engaging interfaces in the form of creative games, virtual reality, augmented reality, and mixed reality can help alleviate these kinds of issues.

Human-Computer Interaction is an important field where design and technology converge. It deals not just with technological issues but also with the psychological and socio-cultural problems involved in designing and building a product or an interface. The interface is an important area that is often overlooked on many platforms: many educational platforms don’t offer the freedom that could help students, instead using a rigid design philosophy that forces users into a particular behavior. The interfaces of games therefore become equally important. Games can be used to create a more participatory environment for teachers and students alike while increasing experimentation and systematic thinking in the class.

Video games are usually played to win or to complete a level. Players are motivated by overcoming these challenges, and this is the key to staying engaged. Games motivate through fun, with instant and visual feedback, which is part of the natural learning process in human development.

We can conclude that the implementation of video games in the realm of education can make online learning more engaging and intuitive for learners. This is still a novel field of research and we have a long way to go, but we cannot dismiss the numerous possibilities that games can provide us in this area.