The Evergreen Mughal Architecture

Mughal architecture is a characteristic Indo-Islamic-Persian building style that flourished in northern and central India under the patronage of the Mughal emperors from the mid-16th to the late 17th century. This new style combined elements of Islamic art and architecture, which had been introduced to India during the Delhi Sultanate (1192–1398) and had produced great monuments such as the Qutb Minar, with features of Persian art and architecture. Mughal monuments are found chiefly in the northern parts of India, but there are also many remains in Pakistan. The Mughal period marked a striking revival of Islamic architecture in northern India. Under the patronage of the Mughal emperors, Persian, Indian, and various provincial styles were fused to produce works of unusual quality and refinement.

The tomb of the emperor Humayun (1564) at Delhi inaugurated the new style, though it shows strong Persian influences. The tomb was designed by the Persian architect Mirak Mirza Ghiyas. Set in a garden at Delhi, it has an intricate ground plan with central octagonal chambers, joined by an archway with an elegant facade and surmounted by cupolas, kiosks, and pinnacles.

The first great period of building activity occurred under the emperor Akbar (reigned 1556–1605) at Agra and at the new capital city of Fatehpur Sikri, which was founded in 1569. The latter city’s Great Mosque (1571; Jami Masjid), with its monumental Victory Gate (Buland Darwaza), is one of the finest mosques of the Mughal period. The great fort at Agra (1565–1574) and the tomb of Akbar at Sikandra, near Agra, are other notable structures dating from his reign. Most of these early Mughal buildings use arches only sparingly, relying instead on post-and-lintel construction. They are built of red sandstone or white marble.

Mughal architecture reached its peak during the reign of the emperor Shah Jahan (1628–1658), its crowning achievement being the magnificent Taj Mahal. This period is marked by a fresh emergence in India of Persian features that had been seen earlier in the tomb of Humayun. The use of the double dome, a recessed archway inside a rectangular fronton, and parklike surroundings are all typical of the Shah Jahan period. Symmetry and balance between the parts of a building were always stressed, while the delicacy of detail in Shah Jahan-period decorative work has seldom been surpassed. White marble was a favored building material, as the Taj Mahal itself attests. After the Taj Mahal, the second major undertaking of Shah Jahan’s reign was the palace-fortress at Delhi: when he established Delhi as his capital in 1638, he began building there the famous Red Fort, which contained the imperial Mughal palace. Among its notable buildings are the red-sandstone-pillared Diwan-i-Am (“Hall of Public Audience”) and the so-called Diwan-i-Khas (“Hall of Private Audience”), which housed the famous Peacock Throne.

The architectural monuments of Shah Jahan’s successor, Aurangzeb (reigned 1658–1707), were not as numerous, though some notable mosques, including the Badshahi mosque in Lahore, were built before the beginning of the 18th century. Subsequent works moved away from the balance and coherence characteristic of mature Mughal architecture; in general, Mughal architecture had begun to decline during his reign, a process that accelerated after his death. It has been said that “architecture, of all the arts, is the one which acts the most slowly, but the most surely, on the soul.” Mughal architecture truly was a revolutionary blend of different cultures, and to the present day the buildings constructed under these emperors remain among the most famous in the world, continuing to inspire and attract millions with their timeless design.

The Rise of Medical Tourism in India

In India, health care is one of the largest sectors in terms of revenue and employment, and it is expanding rapidly. During the 1990s, the Indian health care sector grew at a compound annual rate of 16%. The sector was valued at more than 34 billion U.S. dollars in 2010, grew to about 160 billion U.S. dollars by 2017, and was estimated to reach a value of 372 billion dollars by 2022. A major proportion of this growth is predicted to be attributable to the growth of the medical tourism business. According to Britannica, medical tourism, also called health tourism, surgical tourism, or medical travel, can be defined as international travel undertaken for the purpose of receiving medical care.

Medical tourism in India has gained momentum over the past few years. According to the Confederation of Indian Industries (CII), approximately 150,000 patients arrived in India from across the globe for medical treatment in 2005; by 2016, the number of such visitors had grown to 361,000. The medical tourism industry in India was valued at around $3 billion in 2015 and was expected to grow to $9 billion in 2020 before COVID-19 struck.

There are several characteristics that make India an appealing destination for visitors seeking health services. These include its well-trained health practitioners, a large pool of English-speaking medical staff, a good mix of allopathic and alternative systems of medicine, the availability of super-specialty centers, the use of technologically advanced diagnostic equipment, and, most importantly, the availability of these premium services at a competitive cost.

The costs of comparable treatment in India are on average one-eighth to one-fifth of those in the West. For instance, a cardiac procedure that costs anywhere between US$40,000 and US$60,000 in the United States is priced at about US$30,000 in Singapore, US$12,000–15,000 in Thailand, and only US$3,000–6,000 in India. Likewise, the associated costs of surgery are also low. A study by the India Brand Equity Foundation (IBEF) showed that India is more cost-competitive than other leading medical tourism destinations such as Thailand.

Health services in India have the additional advantage of providing a good mix of allopathic and alternative systems of medicine. For instance, while New Delhi has emerged as a prime destination for cardiac care, Chennai has established a niche for quality eye care, and Kerala and Karnataka have emerged as hubs for state-of-the-art Ayurvedic healing.

The opportunity for profit in this sector has encouraged several large corporations and many non-resident Indians (NRIs) to invest in setting up super-specialty hospitals. These facilities now dominate the upper end of the private sector and cater predominantly to medical tourists and affluent sections of society.

The Government of India has also responded promptly to tap the potential of this sector. In its effort to capitalize on this opportunity, the Government has undertaken measures to promote India as a ‘global health destination’. The National Health Policy strongly encourages medical facilities to provide services to users from overseas (Ministry of Health and Family Welfare 2002). The Indian Ministry of Tourism has introduced a new category of visa for medical tourists, the ‘M’ or medical visa. This program ensures that people who need critical, life-saving treatment can obtain their visas in a timely manner. Easy visa access is one of the primary reasons that patients come from abroad to India for treatment at a reasonable price. Having procedures done promptly has saved countless lives and is likely to save many more in the future.

If the present trend continues, trade in health services will become one of the biggest sectors in India. India has become one of the premier medical tourism destinations in the world for many good reasons. The lower cost does not mean a lower quality of care; the care compares with that received in any Western country. However, the growth of this sector could pose a potential threat to the already crippled public health system in India.

Eating for the Environment

According to the Cambridge Dictionary, veganism can be defined as “the practice of not eating or using any animal products, such as meat, fish, eggs, cheese, or leather”. The term ‘vegan’ was coined in 1944 by Donald Watson, co-founder of the Vegan Society. Initially, the term was used to describe ‘non-dairy vegetarians’. However, in 1951 the Vegan Society updated the definition to “exclude all forms of animal exploitation…”. Veganism seems to be rising fast as a dietary choice, with the number of vegans in the U.S. growing by 600% from 4 million (2014) to 20 million (2018). However, vegans are still a minority, at about 1 percent of the total world population (79 million out of 7.9 billion).

  • Health Benefits

Increasing numbers of people are moving toward vegan diets due to health, animal welfare, or environmental concerns. Vegan diets tend to be rich in nutrients and low in saturated fats. Research suggests that the diet can improve heart health, protect against cancer, and lower the risk of type 2 diabetes.

Vegan diets can boost heart health in several ways. A large-scale study conducted in 2019 linked a higher intake of plant-based foods and a lower intake of animal foods with a reduced risk of heart disease and death in adults. Animal products such as meat, cheese, and butter are the main dietary sources of saturated fats. According to the American Heart Association, eating foods that contain these fats raises cholesterol levels, and high cholesterol levels increase the risk of heart disease and stroke. Plant foods are also high in fibre, which is linked with better heart health, whereas animal products contain very little or none. In addition, people on a vegan diet often take in fewer calories than those on a standard Western diet. A moderate calorie intake can lead to a lower body mass index and a reduced risk of obesity, a major risk factor for heart disease.

According to a 2017 review, eating a vegan diet may also reduce a person’s risk of cancer by 15%. This health benefit may be due to the fact that plant foods are high in various vitamins and phytochemicals (biologically active compounds in plants) that protect against cancers. The International Agency for Research on Cancer has also reported that red meat is “probably carcinogenic,” noting that research has linked it primarily to colorectal cancer but also to prostate cancer and pancreatic cancer. Eliminating red and processed meats from the diet removes these possible risks.

  • Environmental Benefits

Going vegan also has multiple environmental benefits.

The carbon dioxide, methane, and other greenhouse gases (GHGs) produced by animal agriculture account for over 14% of global emissions, more than all transportation emissions combined. Studies show that adopting a vegan diet can cut agricultural greenhouse gas emissions in half.

With greenhouse gases being the leading cause of climate change through the “greenhouse effect”, veganism can be expected to help mitigate it. Studies have found that if everyone went vegan, emissions contributing to global warming would be cut by 70%, enough to stop and reverse the harmful effects of climate change, including rising sea levels, floods, melting glaciers, and droughts.

Animal agriculture impacts the world’s biodiversity by converting wild land to soy and maize crops, the primary livestock feeds. This increasing use of land has left a number of native species threatened at a global level, including various species of monkeys, elephants, bears, tigers, alligators, lions, wolves, and parrots. Choosing a vegan diet goes a long way toward preventing species extinction by eliminating the need for livestock and the feed crops grown for factory farms. Veganism provides a more sustainable agricultural model, focused on feeding people, not animals raised for slaughter.

  • Potential Drawbacks

Vegan diets tend to be rich in many nutrients, low in saturated fat and cholesterol, and higher in dietary fibre. But there are several nutrients that those following a vegan diet often do not consume enough of. Without care, a vegan diet can lead to deficiencies in vitamin D, calcium, omega-3 fatty acids, and zinc. These deficiencies can affect the body in a variety of ways, possibly causing a weakened immune system, a higher risk of bone fractures, high blood pressure, rashes, or fatigue.

Iron plays a crucial role in transporting oxygen throughout the body, but it can be difficult to get enough of it on a vegan diet. Heme iron, one of the two dietary forms of iron, is found only in animal sources. Since those following a vegan diet do not consume heme iron, they must rely on non-heme iron, which is found in plant sources but is not as readily absorbed by the body.

In addition, certain plant-based compounds can further inhibit iron absorption, making it more difficult for those following a vegan diet to consume enough iron, according to a paper published in the American Journal of Clinical Nutrition in 2010. Not consuming enough iron can leave the body feeling lethargic and puts a person at risk of developing iron-deficiency anaemia, a potentially serious condition that occurs when the body isn’t making enough red blood cells.

Ultimately, the success of a vegan diet rests on the conscientiousness of the individual undertaking it. It is a restrictive diet, and unless attention is paid to the nutrients it excludes, followers risk developing deficiency-related problems. The diet has, however, become easier to follow, as supermarkets now stock vegan-friendly products fortified with the nutrients that can otherwise be absent from the diet.

Electronic Wallets in India

It was only after the cash crunch of November 2016 that digital payments received their booster shot into the mainstream Indian economy. The increased penetration of internet and smartphone usage, together with government programs like Digital India, has also paved the way for electronic payments in India. The digital payment instrument that has gained the most popularity among both merchants and consumers, however, is the mobile wallet, or e-wallet.

A mobile wallet is a virtual place to carry your credit and debit card information in a secure digital form on your mobile device. Instead of using a physical plastic card to make purchases, a mobile wallet lets you pay with your smartphone, tablet, or smartwatch in stores, in apps, or on the web. All it requires is downloading a digital wallet app from the app store or play store. Some of the leading players in this space are Paytm, MobiKwik, PhonePe, and Google Pay, among others.
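
To make the idea concrete, the rough sketch below models a wallet the way the paragraph above describes it: it stores only a token that stands in for the card, keeps a prepaid balance of the kind RBI-licensed wallets offer, and records payments made against a merchant. The class and method names (Wallet, StoredCard, payByQr) are illustrative assumptions, not the API of Paytm, PhonePe, or any real provider.

    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of a mobile wallet's core data model.
    // Real wallets rely on tokenization, secure storage, and bank settlement;
    // everything here is simplified for illustration.
    public class WalletSketch {

        // A stored payment instrument: the app keeps a token, never the raw card number.
        record StoredCard(String cardToken, String displayName) {}

        // A completed payment record.
        record Payment(String merchantId, BigDecimal amount, String instrumentUsed) {}

        static class Wallet {
            private final List<StoredCard> cards = new ArrayList<>();
            private final List<Payment> history = new ArrayList<>();
            private BigDecimal prepaidBalance = BigDecimal.ZERO;  // PPI-style stored value

            void addCard(StoredCard card) { cards.add(card); }

            void topUp(BigDecimal amount) { prepaidBalance = prepaidBalance.add(amount); }

            // Pay a merchant from the prepaid balance (e.g. after scanning a QR code).
            boolean payByQr(String merchantId, BigDecimal amount) {
                if (prepaidBalance.compareTo(amount) < 0) return false;  // insufficient balance
                prepaidBalance = prepaidBalance.subtract(amount);
                history.add(new Payment(merchantId, amount, "prepaid-balance"));
                return true;
            }
        }

        public static void main(String[] args) {
            Wallet wallet = new Wallet();
            wallet.addCard(new StoredCard("tok-demo-1234", "Visa ending 1234"));
            wallet.topUp(new BigDecimal("500.00"));
            System.out.println("Paid: " + wallet.payByQr("merchant-42", new BigDecimal("120.00")));
        }
    }

A real wallet adds authentication, encryption, and settlement with the merchant's bank on top of this, but the core bookkeeping looks much like the sketch.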

There were multiple factors that contributed to the rapid adoption of mobile wallets in India. The increased use of smartphones and the internet, along with improved telecom and payment infrastructure, drove market growth because of the convenience smartphones offer in making payments. Across the Asia-Pacific region, this growth in the mobile wallet market is also attributed to falling smartphone prices. The granting of PPI (Prepaid Payment Instrument) licences by the Reserve Bank of India played an important role in establishing e-wallets in the Indian market.

Features such as one-click payment, which let users pay easily across a range of services, have made mobile wallets appealing because they are simple to use. Wallet companies have also offered incentives such as discounts and cashbacks to increase their appeal. It is the cost-effectiveness of these wallets that appeals to merchants: the cost of accepting e-wallets, including setting up infrastructure and transaction fees, is much lower than that of the traditional card-based payment system. Paytm, for instance, does not charge merchants any fees for installation or annual maintenance; while it charges a merchant service fee for online payments, no charges apply to QR-code-based in-store transactions.
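
For a rough sense of that cost difference, the short sketch below compares a merchant's monthly acceptance cost under an assumed 2% card fee against fee-free QR acceptance. The sales figure and both fee rates are illustrative assumptions for the example, not the actual tariffs of any provider.

    import java.math.BigDecimal;

    // Illustrative comparison of merchant acceptance costs under assumed fee rates.
    public class MerchantFeeComparison {
        public static void main(String[] args) {
            BigDecimal monthlySales = new BigDecimal("200000"); // INR processed per month (assumed)
            BigDecimal cardFeeRate  = new BigDecimal("0.02");   // assumed 2% fee on card payments
            BigDecimal qrFeeRate    = BigDecimal.ZERO;          // QR wallet transactions assumed fee-free

            BigDecimal cardCost = monthlySales.multiply(cardFeeRate);
            BigDecimal qrCost   = monthlySales.multiply(qrFeeRate);

            System.out.println("Monthly card acceptance cost: INR " + cardCost);
            System.out.println("Monthly QR acceptance cost:   INR " + qrCost);
            System.out.println("Saving from QR acceptance:    INR " + cardCost.subtract(qrCost));
        }
    }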

However, it has not always been smooth sailing for mobile wallets, even after their boost. They still face a few major challenges. Mandatory KYC requirements and still-evolving government policies have had some negative impact on the overall growth of wallet adoption. Additionally, while medium- to large-value transactions continue to be made through digital banking and cards, only low-value, day-to-day transactions are carried out through mobile wallets. Mobile wallets also face resistance from banks, which worry that wallets might cut into their revenue. Reliable, fast internet connectivity is not available in many parts of the country, which poses a challenge for users in rural and remote areas. There is also a lack of awareness of the benefits of wallets among traditional users, who perceive online payment as insecure, a perception fuelled by instances of fraud in such transactions.

To remain competitive against cash and other alternatives, wallet providers are now looking beyond ‘just payments’ and focusing on value-added services. These wallets encompass additional services such as utility bill payments, mobile top-ups, and even gold purchases. Some wallet providers, such as Paytm and Airtel, have also begun to move into other sectors and offer banking services after receiving approval from the RBI. Although mobile wallets appear to have a bright future in the Indian market, several external market factors will also affect their adoption. They need to keep innovating and take advantage of the dynamism of market economics to stay in the game.

Fake News: The plague of the internet

The authenticity of information has become a longstanding issue affecting businesses and society, for both printed and digital media. On social networks, information spreads so quickly and is so amplified that distorted, inaccurate, or false information acquires tremendous potential to cause real-world impact, within minutes, for millions of users, giving rise to “fake news”. Fake news, or fabricated information that is patently false, has become a major phenomenon in the context of Internet-based media. It has received serious attention in a variety of fields, with scholars investigating the antecedents, characteristics, and consequences of its creation and dissemination. Usually, these stories are created to influence people’s views, push a political agenda, or cause confusion, and they can often be a profitable business for online publishers. False information can deceive people by looking like trusted websites or by using names and web addresses similar to those of reputable news organisations.

The Rise of False Information

False information is not new; it predates the first century BC. But the internet and the emergence of social media have added fuel to the fire. Traditionally we got our news from trusted sources, including journalists and media outlets that are required to follow strict codes of practice. However, the internet has enabled a whole new way to publish, share, and consume information and news at great speed and with very little regulation or editorial oversight. Many people now get their news from social media sites and networks, and it can often be difficult to tell whether stories are credible or not. According to Martina Chapman (media literacy expert), there are three elements to fake news: mistrust, misinformation, and manipulation. Information overload and a general lack of understanding of how the internet works have also contributed to the increase in fake news and hoax stories. Social media sites can play a big part in increasing the reach of these types of stories.

The False Information Business Model

The internet and social media have made it very easy for anyone to publish content on a website, blog, or social media profile and potentially reach large audiences. With so many people now getting news from social media sites, many content creators and publishers have used this to their advantage. False information can be a profitable business, generating large sums of advertising revenue for publishers who create and publish stories that go viral. The more clicks a story gets, the more money online publishers make through advertising revenue, and for many publishers social media is an ideal platform to share content and drive web traffic.

Consequences of Fake News

The spread of fake news can have both personal and academic consequences. In a perfect world, everything reported would be based on facts and you would be able to trust that the media you consume is reliable. Unfortunately, that is not the case. As a student you are expected to find, evaluate, and reference trustworthy information sources in a variety of formats. If fake news is included as evidence for your arguments or as part of your research, it may raise doubts about the integrity of your sources as a whole and about your ability to identify quality information. It can be dangerous to act without having all the facts, but it can be just as detrimental to act on inaccurate information. Whether the decision is political, medical, academic, or personal, a reliable source of information is needed to make an intelligent, fact-based choice. As more and more individuals fall for information online that directly opposes scientific research, researchers are increasingly put in the position of having to defend the validity of their findings. When information dissemination was limited to print, television, and radio, there was less opportunity for individuals to publicly comment on, criticize, or refute knowledge presented by experts. With the internet, it is now possible for groups to push misinformation that aligns with their beliefs and disparage that which does not.

What can we do about False Information?

Companies like Google and Facebook have announced new measures to tackle fake news with the introduction of reporting and flagging tools. Media organisations like the BBC and Channel 4 have also established fact-checking sites. While these are welcome developments, digital media literacy and the skills to critically evaluate information are essential for anyone navigating the internet, and especially for young people.

The spread of anti-vaccination misinformation on social media (and its implications for public health and the global fight against COVID-19) is a textbook example of how deadly misinformation can be. Misinformation can have real-life consequences for individuals, businesses, and public authorities. Beyond the recent pandemic, fake news is another virus we should focus on tackling.

Unpaid Internships: Boon or curse?

Internships are a formative period for job candidates because they offer the opportunity to gain useful professional skills and work experience; they help candidates mature and prepare them for future work challenges. A survey conducted in Holland showed that while 63% of those with paid internships had landed jobs before graduation, just 37% of students with unpaid internships were successful. There is no denying that internships are extremely valuable to students for a smooth transition from college to the job market. Sometimes procuring an internship feels like a matter of life and death, but forfeiting an income should certainly not be the price one pays to gain experience. Are unpaid internships actually more harmful than the positive experience they appear to be on the surface?

Most employers offer unpaid internship opportunities or pay interns below the minimum wage. These unpaid internships affect interns negatively. Fields such as education, arts, social sciences, humanities, and health sciences are notorious for offering unpaid internships. Most people accept an internship that does not pay and focus on acquiring experience instead. However, such trends hamper people’s ability to negotiate their salaries in the future. For instance, those who endure an unpaid internship tend to receive lower starting salaries when they obtain employment than those who complete paid internships. In some instances, those with no internship experience at all are likely to be paid as much as, or even more than, those who endured an unpaid internship.

It is notable that the economy may not be very favourable to many organizations and that students may not feel bothered as long as they are getting valuable work experience. However, working as an intern without a salary has a negative impact on the economy, since it promotes unpaid labour practices. It is estimated that most paid internships translate into full-time jobs, while only 1% of unpaid internships do. As such, taking up an unpaid internship reduces a person’s chances of being employed by an organization on a full-time basis. In the long run, these internships prove harmful to a candidate’s career. Employers are willing to let unpaid interns go, knowing that their organizations will be able to acquire other unpaid interns. Unpaid internships end up benefiting employers, who reduce their wage bills and maximize their profits to the detriment of interns, who must then embark on a tedious job search. Unpaid internships can be understood as employers determined to get something for nothing.

However, on the flip side, there are also some advantages to an unpaid internship. The knowledge and skills required to be successful in the workplace cannot be fully taught in any college or university. College has its own pace, and schedules vary, but most professional environments have more rigid schedules, dress codes, and other policies. An unpaid internship gives a candidate the opportunity to experience the job environment first hand without the risks associated with a normal job. The connections made during an internship are also among the best assets when searching for a new job and prove handy later in one’s career. Establishing strong connections during an internship can result in quality job leads when you graduate and valuable references when you apply for full-time positions. These connections and the mentoring received can’t be found in a college classroom.

It’s better to find out that a preferred career might not be a good fit during an internship than during your first months on a job after graduation. An entire professional field cannot be assessed from one internship, but if it doesn’t feel like a good fit, you have time to figure out why. This gives candidates an opportunity to change their minds about their future prospects.

Working in an office with other professionals and clients helps develop strong skills, including communication, creativity, and teamwork. Learning how to work with people who are difficult takes hands-on experience, as does learning how to avoid being the one who is difficult to work with, and this helps interns become better at handling conflict. The development of these interpersonal skills contributes to the holistic development of an individual, makes them a better person, and prepares them to face the challenges that lie ahead.

Many organizations adopted unpaid internship schemes, especially after the previous economic recession. Employers should not treat this as the default: it is important for them to understand that appreciation in the form of wages, albeit small, can generate additional interest among interns and increase productivity. These internships do offer a tremendous amount of development and experience to candidates, but is that payment enough?

Is Java becoming obsolete?

Java has been around for a long time, and over the past few years it has undergone significant changes to keep it relevant. It is still the most popular programming language among developers, according to a 2019 report. But it may not hold onto that title forever. While it’s still at the top of lists of programming languages, other languages, like Python, are closing in on it. In fact, it is predicted that in the next few years, Python will surpass Java (and C) as the most popular programming language. And Python is not Java’s only competition. The rise of Kotlin has also taken some of Java’s share away, especially once Google started supporting Kotlin for Android development.

Java happened to be in the right place at the right time, similar to how Python now happens to be in the right place at the right time with the explosion of interest in AI and machine learning. According to Rich Sharples, senior director of product management at Red Hat, when Java was created 25 years ago, it was designed with the network in mind, and had a lot of features for network communications. It also came about around the time that multiprocessor systems were gaining traction and it was one of the first languages to make use of those hardware advances without the developer having to do too much more work. In addition to those features, it also had big backing in the industry, from Sun Microsystems, IBM and Oracle. All of these factors combined to make Java a highly successful language and a top choice for many developers for decades.

The reason it has stayed so popular is that it still meets the needs of working across networks. And of course, there’s also the fact that it is open source. “Pretty much anything interesting happening in tech is happening around open source. And it was relatively early in mainstream open source as well. So, it’s checked all the boxes from a technical point of view,” Sharples said.

Java was well suited to the environments of its time, but hasn’t really aged well as technologies change. Sharples explained that Java does a lot of “cool dynamic stuff,” but those sorts of capabilities aren’t really needed in technologies like microservices and serverless. According to Sharples, when working in those types of environments, developers tend to just start fresh when they run into an issue. So, all of those dynamic capabilities Java has aren’t really needed anymore. “What you get is a lot of baggage that doesn’t really provide much value in those modern architectures…If you think back, Java was designed to run on big multi-process machines. You could pretty much guarantee that you owned the machine and you could run multiple applications for each JVM (Java Virtual Machine) or app server. That’s just not the world we live in today. So, a lot of those capabilities bring a lot of weight and complexity and offer little value. So, if you look at functions as a service, you don’t see Java mentioned an awful lot.”

But Sharples doesn’t believe Java is going away quite yet. He believes Java will likely still experience growth for many years, or even decades. There are many projects that let Java thrive in today’s environments, such as Oracle’s GraalVM, which allows for interoperability in shared runtimes, and Red Hat’s Quarkus, which is a Kubernetes-native Java framework.

Mike Milinkovich, executive director of the Eclipse Foundation, which oversees Java Enterprise Edition, also believes Java itself is going to evolve to support these technologies. “I think that there are going to be changes to Java that go from the JVM all the way up,” said Milinkovich.

Sharples also believes Oracle has done a pretty good job of “keeping the innovation going without breaking the stability.” Oracle currently has several active projects focused on facilitating innovation for Java:

• Valhalla, focused on introducing value types to Java

• Panama, which is about improving the connection between Java and foreign (non-Java) code

• Loom, which focuses on lightweight concurrency so that Java can scale

• Amber, which is focused on finding ways to simplify the language (a brief sketch of features in this vein follows this list)

• Metropolis, which is trying to see how much of the JVM can be written in Java so that both the JVM and Java can evolve faster
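
To give a flavour of where this work has already landed, here is a small, hedged sketch combining two features that grew out of Amber (records and switch expressions) with one from Loom (virtual threads). It assumes a recent JDK (21 or later); the class name and example data are invented for illustration.

    import java.util.List;

    // Sketch of newer Java features: records and switch expressions came out of
    // Project Amber; virtual threads came out of Project Loom (JDK 21+).
    public class ModernJavaSketch {

        // A record is a compact, immutable data carrier: the compiler generates
        // the constructor, accessors, equals, hashCode, and toString.
        record HttpStatus(int code, String reason) {}

        // A switch expression with arrow labels returns a value directly.
        static String category(HttpStatus status) {
            return switch (status.code() / 100) {
                case 2 -> "success";
                case 4 -> "client error";
                case 5 -> "server error";
                default -> "other";
            };
        }

        public static void main(String[] args) throws InterruptedException {
            List<HttpStatus> statuses = List.of(
                    new HttpStatus(200, "OK"),
                    new HttpStatus(404, "Not Found"),
                    new HttpStatus(503, "Service Unavailable"));

            // A virtual thread is scheduled by the JVM rather than the OS, which is
            // what lets a server create very large numbers of them cheaply.
            Thread worker = Thread.ofVirtual().start(() ->
                    statuses.forEach(s -> System.out.println(s + " -> " + category(s))));
            worker.join();
        }
    }

Whether additions like these are enough to keep Java attractive in serverless and function-as-a-service environments is exactly the question Sharples raises above.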

“Languages are hard to change, so Java will continue to lead. It will be interesting to see if other languages begin to use the JVM. Not just JVM dialects like Scala and Kotlin but other languages with their own user bases, like Ruby, JavaScript, or Python,” said Mark Little, VP Middleware Engineering at Red Hat.

Childhood Obesity

In 2019, an estimated 38.2 million children under the age of 5 years were overweight or obese. Once considered a high-income country problem, child obesity is now on the rise in low- and middle-income countries, particularly in urban settings. Childhood obesity is a complex health issue. It occurs when a child is well above the normal or healthy weight for his or her age and height. The causes of excess weight gain in young people are similar to those in adults, including behaviour and genetics. The rate of childhood obesity has more than tripled over the last four decades—rising from 5 percent in 1978 to 18.5 percent in 2016. But what are the reasons for this rapid increase?

Fast food consumption

Increased fast food consumption has been linked with obesity in recent years. Many families opt for fast food because it is often favoured by their children and is both convenient and inexpensive. Foods served at fast food restaurants tend to be high in calories and low in nutritional value. Though many studies have shown weight gain with regular consumption of fast food, it is difficult to establish a causal relationship between fast food and obesity.

Sugary beverages

Sugary drinks are another factor that has been examined as a potential contributor to obesity. They are often thought of as being limited to soda, but juice and other sweetened beverages fall into this category. A study examining children aged 9–14 between 1996 and 1998 found that consumption of sugary beverages increased BMI by small amounts over the years. Sugary drinks are less filling than food and can be consumed more quickly, which results in a higher caloric intake.

Activity level

One of the factors most significantly linked to obesity is a sedentary lifestyle. Each additional hour of television per day increased the prevalence of obesity by 2%. Television viewing among young children and adolescents has increased dramatically in recent years, and the increased time spent in sedentary behaviours has decreased the time spent in physical activity. Research indicates that the number of hours children spend watching TV correlates with their consumption of the most heavily advertised goods, including sweetened cereals, sweets, beverages, and snacks. Media effects have also been found for adolescent aggression, smoking, and the formation of unrealistic body ideals. Regulation of the marketing of unhealthy foods is recommended, as is media advocacy to promote healthy eating.

Psychological factors

Self-esteem

Research findings comparing overweight or obese children with normal-weight children in regard to self-esteem have been mixed. Some studies have found that obese children have lower self-esteem, while others have not. There is some consensus in the literature that a global approach to measuring self-esteem in children who are overweight or obese is misleading, as the physical and social domains of self-esteem seem to be where these children are most vulnerable.

Eating disorder symptoms

Traits associated with eating disorders appear to be common in obese adolescent populations. A number of studies have shown a higher prevalence of eating-related pathology (e.g. anorexia, bulimia nervosa, and impulse-regulation problems) in obese children and youth.

Consequences

Childhood obesity can profoundly affect children’s physical health, social and emotional well-being, and self-esteem. It is also associated with poor academic performance and a lower quality of life for the child. It has been linked to numerous medical conditions, including, but not limited to, fatty liver disease, type 2 diabetes, asthma, cardiovascular disease, high cholesterol, glucose intolerance and insulin resistance, skin conditions, menstrual abnormalities, and impaired balance. Until recently, many of these conditions had been found only in adults; now they are extremely prevalent in obese children. Childhood obesity has also been found to affect school performance negatively. One research study concluded that overweight and obese children were four times more likely to report having problems at school than their peers. They also tend to miss school more frequently, especially those with chronic health conditions such as diabetes and asthma, which can further affect academic performance.

The growing issue of childhood obesity can be slowed if one focuses on its causes. There are many components that play into childhood obesity, some more crucial than others. A combined diet and physical activity intervention conducted in the community, with a school component, is more effective at preventing obesity or overweight than either approach alone. Moreover, if parents enforce a healthier lifestyle at home, many obesity problems could be avoided. What children learn at home about eating healthily, exercising, and making the right nutritional choices will eventually spill over into other aspects of their lives. This will have the biggest influence on the choices kids make when selecting foods and choosing to be active. Focusing on these causes may, over time, decrease childhood obesity and lead to a healthier society as a whole.

The History of Money

Money, it is often said, is the root of all evil; yet it is money that controls the world. But what exactly is money? Money is a term that refers to two concepts: the abstract unit of account in terms of which the value of goods, services, and obligations can be compared, and anything that is widely accepted as a means of payment. Frequently the standard of value also serves as a medium of exchange, but that is not always the case. Nowadays we have digital currencies such as Bitcoin alongside our modern-day dollars and pounds. But how exactly did we arrive at the currencies of the 21st century?

The Beginning: The Barter System

Barter is the exchange of resources or services for mutual advantage. The bartering system likely began with tribes in Mesopotamia around 6000 BC. The Phoenicians (in the eastern Mediterranean, part of modern-day Lebanon) saw the practice and adopted it in their own society. Ancient peoples frequently used barter to get the food, weapons, and spices they needed. Because of salt’s great value, Roman soldiers bartered their services to the empire in exchange for it. In Colonial America, the colonists used bartering to get the goods and services they needed. Today, individuals, organizations, and governments still use, and often prefer, barter as a form of exchange of goods and services.

9000 – 6000 B.C.: Cattle and Grains

Cattle, a category that here includes not only cows but also sheep, camels, and other livestock, are the first and oldest form of money. Livestock was also frequently bartered in exchange for various commodities. With the advent of agriculture came the use of grains and other plant products as a standard form of barter in many cultures.

1200 B.C.: Cowrie Shells

The first use of cowries, the shells of a mollusc found in the shallow waters of the Pacific and Indian Oceans, was in China. Historically, many societies have used cowries as money, and as recently as the middle of the 20th century cowries were still used in some parts of Africa. The cowrie is the most widely and longest used currency in history.

1000 B.C.: First Metal Money

Bronze and copper imitations of cowrie shells were manufactured in China at the end of the Stone Age and are considered some of the earliest forms of metal coins. Metal tool money, such as knife and spade monies, was also first used in China. These early metal monies developed into primitive versions of round coins. They were made of base metals and often contained holes so they could be strung together like a chain.

500 B.C.: Coins

Outside China, the first coins developed out of lumps of silver. They soon took the familiar round form and were stamped with various gods and emperors to mark their authenticity. These coins first appeared in Lydia (part of present-day Turkey), but the techniques were quickly copied and further refined by the Greek, Persian, and later the Roman empires. Unlike Chinese coins, these new coins were made from silver, bronze, and gold, metals with more inherent value.

118 B.C.: Leather Money

Leather money was used in China in the form of small pieces of white deerskin with colourful borders. This could be considered the first documented type of banknote.

806: Paper Currency

The first known paper banknotes appeared in China. China experienced over 500 years of early paper money, spanning the ninth through the fifteenth century. Over this period, production of paper notes grew to the point that their value rapidly depreciated and inflation soared. Then, beginning in 1455, the use of paper money in China disappeared for several hundred years. This was still many years before paper currency would appear in Europe, and three centuries before it was considered common.

1816: The Gold Standard

Gold was officially made the standard of value in England in 1816. At this time, guidelines were made to allow for a non-inflationary production of standard banknotes which represented a certain amount of gold. Banknotes had been used in England and Europe for several hundred years before this time, but their worth had never been tied directly to gold. In the United States, the Gold Standard Act was officially enacted in 1900, which helped lead to the establishment of a central bank.

1930: End of the Gold Standard

The Great Depression of the 1930s, felt worldwide, marked the beginning of the end of the gold standard. In the United States, the gold standard was revised and the price of gold was devalued. This was the first step in ending the relationship altogether. The British and international gold standards soon ended as well, and the complexities of international monetary regulation began.

The Present

Modern-day money is no longer restricted to coins or banknotes; it has also advanced into the virtual world, with new digital currencies outside the jurisdiction of governments, such as cryptocurrencies, and with electronic wallets that make payments through a portable electronic device such as a smartphone or tablet, now called mobile payment.

The evolution of money has seen many forms, from physical ones such as livestock and coins made of precious metals to virtual ones that now fit in the palm of your hand. Money makes the world go round, and it is unknown what its next form will be.

The Greatest Threat to Wildlife

In Africa, the poaching of animals such as elephants and big cats for their tusks or skins has been a problem well known throughout the world. But the impact of hunting animals for their meat may pose an even greater threat. This trade is known as the bushmeat trade, the non-traditional hunting of non-game animals for meat. Wild chimpanzees and other forest animals are systematically hunted and sold as meat in markets across Africa and in cities around the world. What was once a form of subsistence hunting in rural villages has evolved into a commercial trade that has grown in scale over recent decades.

While bushmeat hunting has been practiced since the late 1800s, the scale of hunting is far greater today and still increasing, facilitated by road building in the forest for logging and mining operations and fuelled by growing demand in urban markets, where comparatively well-off customers consider wild-sourced protein a delicacy and a status symbol. A smaller international market for exotic meat thrives in Europe and the United States.

THE ENVIRONMENTAL IMPACTS OF BUSHMEAT TRADING

  • Environmental Imbalance

Poachers and hunters involved in the illegal bushmeat market mainly use wire snares to trap their quarry, and larger animals like jackals, lions, cheetahs, and wild dogs often get caught in these traps. These carnivores, lions above all, are affected by the bushmeat trade in two ways: it dramatically reduces the populations of the animals that are their food sources, such as antelopes and other smaller animals (pigs and boars), and it kills the carnivores directly when they are inadvertently caught in snares set to illegally harvest other species. The removal of any animal from the food chain causes an imbalance for that species as well as for the other species dependent on it for food.

  • Endangering of animals

Roughly 301 mammal species are threatened by hunting for bushmeat, including 126 primates, 65 even-toed ungulates, 27 bats, 26 diprotodont marsupials, 21 rodents, 12 carnivores, and all pangolin species. On Bioko Island, off the coast of Equatorial Guinea, for example, hunting for bushmeat has decimated populations of the island’s seven endemic monkey species, all of which are endangered. Another prime example is the elephant, which has been hunted not only for its tusks but also for its meat, so much so that the bushmeat trade is estimated to be worth more than the ivory industry. While the ivory obtained from the tusks might be sold for around $180 (in 2007), a poacher could sell the meat (approximately 1,000 pounds) for up to $6,000, primarily because demand is high and elephant meat is considered prestigious and therefore commands higher prices. The elephant population dropped by 62% over the past decade, and the situation has not improved, with numbers falling from 1.34 million in 1976 to barely 415,000 elephants in 2018.

The Impact of Bushmeat on Humans

Animal sources may have been the cause of infectious diseases such as tuberculosis, leprosy, cholera, smallpox, measles, influenza, and syphilis acquired by early agrarians. Today, the emergence of HIV-1 and AIDS, Ebola virus disease, and Creutzfeldt-Jakob disease is attributed to animal sources. Thomas’s rope squirrel and the red-legged sun squirrel were identified as reservoirs of the monkeypox virus in the Democratic Republic of the Congo in the 1980s. Outbreaks of the Ebola virus in the Congo Basin and in Gabon in the 1990s have been associated with the butchering and consumption of chimpanzees and bonobos. The risk of transmitting bloodborne diseases is higher when butchering a carcass than when transporting, cooking, or eating it. Many hunters and traders are not aware of zoonoses and the risks of disease transmission. An interview survey in rural communities in Nigeria revealed that 55% of respondents knew of zoonoses, but education and cultural traditions remain important drivers for hunting and eating bushmeat despite the risks involved.

Wild meat provides a primary food source for many millions of people throughout the developing world, especially where other food options are not readily available. Unsustainable hunting has now metamorphosed into a global crisis: a serious threat to the food security of many people and to the immediate survival of hundreds of mammal species and other wildlife, with altered ecological cascades rippling through ecosystems. Averting this crisis requires bold and prompt action. Approaches that benefit both local people and wildlife will be required to avoid a future of hungry, desperate people inhabiting ‘empty landscapes’ across much of the planet.

Why is healthcare in the U.S. so expensive?

These days, all it takes is one surprise medical bill to send a patient into bankruptcy. The United States’ health care system operates differently from many others in the world, with high costs for the individual as a distinguishing characteristic. In fact, the higher prices mean the U.S. spends more on health care than other developed countries. According to a February 2020 survey, almost one in three Americans worries about affording health care. So, what exactly makes health care in the U.S. so expensive?

The most important reason is that U.S. health care is based on a “for-profit insurance system,” one of the only ones in the world, according to Carmen Balber, executive director of Consumer Watchdog, who has advocated for reform in the health-insurance market. In the U.S., most health insurance is administered by private companies, and individuals must pay for it themselves, even if their employer subsidizes some of it. The underlying motive to make money has a ripple effect that increases prices.

Similarly, Dr. Georges Benjamin, executive director of the American Public Health Association, pointed to the lack of universal health care, in which everyone is guaranteed access without undergoing financial hardship, as a primary reason for high costs. “Part of our system is that everybody is … paying for somebody else’s underpayment, whether they like it or not,” he said. “Everybody is trying to figure out who else can pay for it instead of them.”

Pay per service

U.S. health care exists in a system where patients are charged based on the services they receive. In many parts of the healthcare ecosystem, people are paid for volume, and that fuels an orientation of ‘might as well get an extra scan.’ It is in the economic interest of the hospital, the physician, and the health care system when they are being paid fee-for-service, and the justification is that more is better.

As a result, there is lower use of primary care, because the fee-for-service model “encourages overutilization.” Instead of taking people into a room, examining them, taking a history, and spending time talking to patients, doctors are quick to jump to a CAT scan or a diagnostic test when a history and physical exam would provide the answer. Fee-for-service creates an incentive to provide more procedures, instead of helping patients get healthier so that the nation as a whole needs fewer procedures.

Lack of government regulation

The companies that provide and charge for health care, like hospital systems and drug makers, have more power to keep costs high when they are negotiating with multiple potential payers, such as various private insurance companies. But when they must negotiate with a single payer, like the federal government, there is more pressure to meet that payer’s terms in order to sell their services.

For example, a study found that private insurance companies paid almost two and a half times what Medicare would’ve paid for the same medical service at the same facility.

To make matters costlier, the U.S. government doesn’t regulate what most companies in the health care space can charge for their services, whether it’s insurance, drugs or care itself.

Consolidation of insurance and hospital systems

While the U.S. healthcare system itself may be fragmented, in many parts of the country there are only one or two companies providing health insurance or medical care. This means that, again, there is little to no incentive for them to lower costs, since patients don’t have much of a choice.

What’s more, health care providers are paid, on average, much more in the U.S. than in other countries. “Despite the enormous cost that we have in America for health care, we don’t get the same value of our health care dollar as other nations do,” Benjamin added. “If you get sick, this is the place to be, no doubt about that, but … we don’t have a system with everybody in and nobody out.”

Why the rise in divorce?

Cheng-Tong Lir Wang and Evan Schofer, two sociologists from the University of California, found that between 1970 and 2008 the global divorce rate rose from 2.6 divorces for every 1,000 married people to 5.5; the rate had more than doubled. Meanwhile, India’s divorce rate stood at 1% in 2017, according to a report from the Organisation for Economic Co-operation and Development. While the absolute number of divorces has gone up from 1 in 1,000 to 13 in 1,000 over the last decade or so, India still remains at the top of the list of countries with the lowest divorce rates. But why the sudden increase in divorce rates in a country where marriage holds the highest status and divorce is still taboo?

Divorce is riddled with stigma in India, with divorced women being looked down upon in society. One of the main reasons for the rise in divorce rates is that women are finally taking a stand against the injustice done to them for generations. Today, three billion women and girls live in a country where rape within marriage is not explicitly criminalized. But injustice and violations take other forms as well. In one out of five countries girls do not have the same inheritance rights as boys, while in others (a total of 19 countries) women are required by law to obey their husbands. Around one third of married women in developing countries report having little or no say over their own healthcare. A divorce gives them the opportunity to be more in control of their lives and not rely on anyone else.

Cheating and affairs are also major contributors to divorce in India. This issue has grown with an increasingly digitalised world, with apps providing the ability to contact people at the tap of a screen. Many Indian women in marriages are aware of their husbands having affairs and turn a blind eye because of their age or the years spent in the marriage, but that does not mean the dynamics of the marriage are happy anymore. An affair, once found out, is uniquely destructive to a marriage: it destroys trust, love, and care, yet for many the marriage will carry on due to family and societal pressures.

Indian marriages are influenced, supported, and inspired by family. But, sadly, marriages are destroyed by family too, especially extended family. The most common marital issue is that between the in-laws and the daughter-in-law. The breakdown of marriages in which the daughter-in-law is deemed not good enough by the in-laws is one of the biggest reasons for divorce in India. Complaints like ‘not enough dowry’, ‘not being part of the family’, and ‘stealing the son from the family’ are all typical examples of the cause.

Divorce marks the death of a marriage, which gives it a reputation as a negative word. But women are challenging that perception now and pushing for change. Through stand-up comedy, spoken-word poetry, Instagram accounts, and support groups, they are fighting the stigma around divorce, one act, one verse, one post at a time. The end of a marriage can mean the beginning of a happy life, and not necessarily the end of life itself.

Over-watch or Over-worked?

In October 1958, the American physicist Willy Higinbotham created a tennis game called “Tennis for Two” on an oscilloscope and analog computer for a public demonstration at Brookhaven National Laboratory. This was the world’s first video game and was a major inspiration behind the legendary 1972 arcade game Pong. With Pong came a boom of other successful video games, including Pac-Man (1980) and Donkey Kong (1981), and a new industry was born.

In recent years, beyond consoles and arcades, the emergence of social networks, smartphones, and tablets has introduced new categories such as mobile and social games, and by 2021 the value of the video game industry in the United States was estimated at $65.49 billion. But as tens of thousands of video game fans and creators shell out their dollars, a difficult truth about the gaming industry is beginning to emerge: what is seen by outsiders as a fun, creative business is becoming psychologically and financially unbearable for those working in it.

“Every game you like is built on the backs of workers,” says Nathan Ortega, who thought he had found his dream job when Telltale Games offered him a position as a community and video manager in 2015. Ortega was a Telltale enthusiast, so it was an easy decision to pack up his stuff and relocate near the company’s headquarters in California. But he was soon so stressed out by work that he developed an ulcer and started coughing up blood.

The dedication that goes into masterpieces of gaming is admirable. Whether it be designing, coding, producing, or even testing a game, it is clear that passion is abundant among the people working “behind the screen”. While this euphoric hype is indeed an aspect of the gaming industry, more often than not the wave that pushes these passion-filled developers forward is harsh and ruinous, leaving nothing but a husk of what was once a spirited creator. Video game makers call it “crunch”: the practice of working nights and weekends to hit a tight deadline. But unlike other professions that might muster employees to work overtime in the final stretches of a project, in game development it can be a permanent, and debilitating, way of life.

In October 2020, Polish game developer CD Projekt Red asked all of its employees to work six-day weeks in the lead-up to the November release of Cyberpunk 2077, one of the most anticipated games of the year. But the new policy was just the formalization of an informal code that had long existed at the studio. Various departments had already been working nights and weekends for weeks or months straight in order to meet deadlines, according to people who have worked there. Studio head Adam Badowski said he was aware that many employees had been testing their limits to bring the game to launch, efforts for which he was “immeasurably thankful!”

“Crunch” has existed in the gaming industry for decades, and many other game developers have cultivated reputations for running flat out. As the industry prepares for another big holiday season, workers are putting in long hours to finish their games in time. Few employees would object to the occasional night or weekend, but crunch is a culture, an atmosphere, a state of mind. Countless horror stories have come out from ex-developers who know crunch is bad news. After a 70-, 80- or even 90-hour work week, spending quality time with family becomes more of a challenge than a relief. One consequence of overworked developers being unable to take basic care of themselves is an inclination to abandon the industry altogether.

Amid this turbulence, dozens of workers in the gaming business are calling for the industry to unionize. The turmoil presents them with both an opportunity and a challenge, since the very instability that fuels the push can also make it difficult to talk about unionization. Still, a recent survey conducted by the industry group International Game Developers Association found that 47% of workers said they would support a union at their company, while 26% said they “maybe would.”

Video games are supposed to be an outlet that relieves stress and sparks imagination and creativity, but the companies that make them are instead squeezing every last hour out of their employees in pursuit of profit. If crunch isn’t solved, this industry is doomed to failure.

India at the Olympics

The first modern Olympics were held in Athens, Greece, in 1896, and it took only four years for India to gain its first representation at the Summer Games. It all started in 1900, when India sent the lone athlete Norman Pritchard to Paris, where he won two medals, in the men’s 200 meters and the men’s 200-meter hurdles. India sent its first Olympic team in 1920, comprising four athletes and two wrestlers, and has participated in every Summer Games since. It was not until 1928, however, that the country won its next medal, and thus began the domination of the Indian hockey team. The pre-Independence side dominated the Olympics from 1928 to 1936, winning three straight titles. At the 1928 Amsterdam Olympics, India beat Austria, Belgium, Denmark and Switzerland before defeating the Netherlands in the final to claim its first ever gold. At the 1932 Summer Olympics, India defeated the USA 24–1, the largest margin of victory in Olympic hockey history, and in the 1936 final it beat Germany 8–1, the largest margin ever in an Olympic hockey final.

From 1948, an independent India began sending delegations of more than 50 athletes, selected by various sports federations and headed by a chef-de-mission. The Indian field hockey team won the gold medal at the 1948 Summer Olympics by defeating Great Britain in the final, the first gold medal for India as an independent nation. The team continued its dominance, winning a sixth straight title by defeating Pakistan in the final at the 1956 Summer Olympics. With further wins in 1964 and 1980, the Indian hockey team has recorded eight titles in field hockey, more than any other nation.

Originally scheduled to take place from 24 July to 9 August 2020, the Tokyo 2020 Summer Olympics was postponed in March 2020 as a result of the COVID-19 pandemic and is being held largely behind closed doors, with no public spectators permitted under the state of emergency. Despite being rescheduled for 2021, the event retains the name Tokyo 2020 for marketing and branding purposes. This is the first time that the Olympic Games have been postponed and rescheduled rather than cancelled.

On Day 1 of the Games, weightlifter Mirabai Chanu gave India its first medal of Tokyo 2020, winning silver in the women’s 49kg category as China’s Hou Zhihui took gold. Hou lifted 94kg in the snatch to set an Olympic record, while Chanu registered an Olympic record of her own with a successful lift of 115kg in the clean and jerk. The Indian women’s hockey team began their campaign on a poor note, losing 1–5 to the Netherlands. In the table tennis singles, Manika Batra won her first-round match against Tin-Tin Ho of Great Britain in straight games. In tennis, Sumit Nagal won his first-round singles match against 2018 Asian Games gold medallist Denis Istomin of Uzbekistan. The Indian badminton men’s doubles pair of Satwiksairaj Rankireddy and Chirag Shetty overcame a difficult challenge from the Chinese Taipei duo of Lee Yang and Wang Chi-Lin. In men’s hockey, India beat New Zealand 3–2 in their Pool A match, recovering from a poor start early in the game.

As of 24 July 2021, India has won 9 gold, 7 silver and 12 bronze medals at the Olympics, a total of 28. With expectations set high after the opening-day silver, it looks as if India may finally return to the kind of dominance at the Games that its hockey team once showed.

Covid-19: a travel and tourism wrecker

In the past decades, tourism experienced continued growth and became one of the fastest-growing economic sectors globally, with international tourist arrivals rising from 880 million in 2009 to 1.5 billion in 2019. The industry contributed $8.9 trillion to global GDP in 2019, a share of 10.3%. This strong growth, however, came to a halt in 2020 amid the Covid-19 pandemic. With airplanes grounded, hotels closed and travel restrictions in place, travel and tourism became one of the most affected sectors from the very start of the outbreak. The pandemic cut international tourist arrivals in the first quarter of 2020 to a fraction of what they had been a year earlier.

Closing borders, tourism & travel bans

Countries all over the world applied travel restrictions to limit the spread of the coronavirus. Airport closures, the suspension of flights and nationwide lockdowns were just some of the measures implemented in an effort to contain the pandemic. By the middle of 2020, at least 93% of the global population lived in countries with coronavirus-related travel restrictions, and approximately 3 billion people resided in countries enforcing complete border closures to foreigners.

The Decline of International Tourist Arrivals during the Pandemic

The number of international tourist arrivals grew remarkably over the last decade and sustained that growth in recent years: arrivals reached 1.3 billion globally in 2017, 1.4 billion in 2018 and 1.5 billion in 2019. Then, due to the severe impact of the COVID-19 pandemic, international tourism fell by 65% in the first half of 2020 compared with 2019 figures. In May 2020, the majority of the UNWTO (World Tourism Organization) panel of tourism experts expected to see signs of recovery by the final quarter of 2020, but mostly in 2021.

Covid-19 and Airline Failures

The International Air Transport Association (IATA) financial outlook released in June 2020 showed that airlines globally were expected to lose $84.3 billion over the year, for a net profit margin of -20.1%. It also stated that revenues would fall by 50% to $419 billion, from $838 billion in 2019, and that in 2021 losses were expected to be cut to $15.8 billion as revenues rose to $598 billion. IATA’s Director General and CEO stated that “Financially, 2020 will go down as the worst year in the history of aviation. On average, every day of this year will add $230 million to industry losses. In total that’s a loss of $84.3 billion.” What is shocking is how many airlines have failed during the coronavirus pandemic, and even for airlines still in business the situation is severely difficult. Avianca Holdings, the second-largest carrier in South America, survived the Great Depression – but not the coronavirus: the airline filed for Chapter 11 bankruptcy protection in May 2020.

Hospitality Sector Hit by the Lockdown

The lockdowns imposed during the pandemic have affected the tourism industry across the globe, and the hotel sector is among the hardest hit. Comparing the first quarter of 2020 with 2019 figures, the global hospitality data company STR found that hotel occupancy rates dropped by as much as 96% in Italy, 68% in China, 67% in the UK and 59% in the US. There is no doubt that the hotel industry has been severely affected by the pandemic and the lockdowns.

Balancing the Return of Tourism Revenues and Safety

As of July 2020, the EU had opened its borders to tourists from 15 countries, leaving the U.S. off the list. Health officials developed a plan to classify accepted countries based on how well each was controlling the coronavirus: a country is considered to have the virus under control when its number of new cases over the previous 14 days, per 100,000 inhabitants, is close to or below the EU average. On 15 June, the European Commission launched ‘Re-open EU’, a web platform containing essential information to allow a safe relaunch of free movement and tourism across Europe. The platform provides real-time information on borders, available means of transport, travel restrictions, and public health and safety measures.

The Return of Tourism Globally

With lockdowns ending around the world, many countries have started to ease border restrictions and reopen for international tourists. Although many governments are still advising against “nonessential” international travel, a host of popular destinations have eased their Covid-19 border restrictions and are readily welcoming tourists back:

– The European Commission has released guidelines for how its Member States can start to ease coronavirus travel restrictions and enable tourism to begin again

– Destinations like Dubai, the Maldives, Egypt, Lebanon, Croatia, Kenya, Tanzania and Jamaica have already opened their doors to foreign visitors again, while Thailand hopes to reopen soon

While tourism is slowly returning in some destinations, most members of the UNWTO Panel of Tourism Experts expect international tourism to recover only in the second half of 2021, followed by those who expect a rebound in the first half of the year.

However, there are still concerns over the lack of reliable information and a deteriorating economic environment, both of which weigh on consumer confidence, especially with potential new limits on travel as the world comes to grips with a second Covid-19 wave. Fears of fresh waves of coronavirus brought home by returning vacationers continue to wreak havoc on the world’s tourism industry.