Marked Reduction in Aggregate Technical and Commercial Losses of DISCOMs in FY22

Aggregate Technical and Commercial Loss (AT&C Loss) and the ACS-ARR Gap are key indicators of DISCOM performance. For the last two years, the AT&C losses of the country's DISCOMs hovered at 21-22%. The Ministry of Power instituted a number of measures to improve the performance of utilities. Preliminary analysis of FY2022 data from 56 DISCOMs, which account for more than 96% of input energy, indicates that AT&C losses declined significantly to ~17% in FY2022 from ~22% in FY2021.

A reduction in AT&C losses improves the finances of the utilities, enabling them to better maintain the system and buy power as required, which benefits consumers. The reduction in AT&C losses has in turn reduced the gap between the Average Cost of Supply (ACS) and the Average Realizable Revenue (ARR). The ACS-ARR Gap (on a subsidy-received basis, excluding Regulatory Income and UDAY Grant) declined from Rs. 0.69/kWh in FY2021 to Rs. 0.22/kWh in FY2022.

The decline of about 5 percentage points in AT&C losses and of 47 paise in the ACS-ARR Gap in one year is the result of a number of initiatives taken by the Ministry of Power. On 4th September 2021, the Ministry of Power revised the prudential norms of PFC and REC, the lending agencies for the power sector, to provide that loss-making DISCOMs cannot avail financing from PFC and REC unless they draw up an action plan for reducing losses within a specific timeframe and obtain their State Government's commitment to it. The Ministry of Power also decided that any future assistance under any scheme for strengthening the distribution system will be available to a loss-making DISCOM only if it undertakes to bring its AT&C losses / ACS-ARR Gap down to specified levels within a specific timeframe and obtains its State Government's commitment to it. The Revamped Distribution Sector Scheme (RDSS) lays down that funding under the scheme will be available only if the DISCOM commits to an agreed loss-reduction trajectory. The Ministry of Power made a series of presentations before the 15th Finance Commission, as a result of which the Commission provided an additional borrowing window to States contingent on their taking steps to reduce their DISCOMs' losses. The Ministry of Power issued Regulations on 7th October 2021 providing for mandatory energy accounting and energy auditing for all DISCOMs. On 3rd June 2022, the Ministry of Power issued the Late Payment Surcharge Rules, which provide that unless distribution companies promptly pay for the power drawn from the ISTS, their access to the power exchanges will be cut off. While putting all these measures in place, the Ministry of Power also worked with the distribution companies to provide the necessary finances under the RDSS for undertaking loss-reduction measures.

The above improvement is the result of concerted efforts by the Ministry of Power, the State Governments, and the distribution companies to implement reforms and adopt best practices. As a result, the viability of the power sector has improved. This was necessary because the demand for power has been growing and further investments will be needed for the power sector to expand to meet that demand; those investments will come only if the sector remains viable.

***

How Covid Affected Engineering Students and Ways to Cope Once Colleges Reopen

Since the start of 2020, the world has been witnessing a health calamity with uncertain implications. The coronavirus pandemic, or Covid-19, has created havoc, causing immense damage in terms of loss of human lives and financial and economic shortfalls, and it has disrupted students' education.

We students, I feel, are among those whose lives took an unalterable turn. The school- and college-going community suddenly had to shift from in-class learning to on-screen lectures and demonstrations. Students realized that theory classes can be attended from home and that notes and study materials are available online. But engineering students, and students from technical backgrounds in general, are lagging behind because they are not attending practical classes. Practical classes are of the utmost importance for any technical student who wants to work in a technical field after graduation.

So, when colleges reopen, students need to know where to focus their efforts.

Students should attend more laboratory/practical classes once colleges reopen, since we now know that theory can be covered online and theoretical knowledge can be gained from video lectures, PDF notes, and other material available on the internet.

Students should participate in events and exhibitions, learn what projects can be built, and identify the topics needed to gain a strong grip on their subjects. Technical fests should be taken seriously as opportunities to build new projects; those who are not building anything can at least observe others' work at these events.

Once colleges reopen, start looking for industrial training and for companies or workshop jobs (even odd jobs) to gain experience.
Students who want to start their own company should begin once colleges reopen; final-year students in particular can start their own projects or launch a start-up built on innovation, putting their skills and potential to work.

Looking at current recruitment trends, there will always be a growing demand for creative reasoning, design thinking, and problem-solving skills.

In a nutshell, the whole engineering fraternity will have to let go of the traditional mindset and think out of the box to find innovative solutions for the way forward.

As the dust settles on this outbreak, the new dawn will bring new challenges of survival, and those who adapt and adopt faster will win the race.

Two Types of AI

AI systems typically exhibit at least some of the following behaviors associated with human intelligence: planning, reasoning, problem-solving, knowledge representation, perception, motion and manipulation, and, to a lesser degree, social intelligence and imagination. AI is pervasive nowadays: it is used to decide what you will purchase next online, to power virtual assistants like Amazon's Alexa and Apple's Siri, to recognize who and what appears in a video, to spot spam, and to identify credit card fraud.

At a high level, artificial intelligence can be categorized into two broad types:

Narrow AI is what we see all around us today: smart systems that have been trained, or have learned, to execute specific tasks without being explicitly programmed to do so. This form of artificial intelligence appears in the speech and language processing of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems of self-driving vehicles, and in the recommendation engines that suggest items you might like based on what you have purchased in the past. Unlike humans, these systems can only learn, or be taught, how to perform particular tasks, which is why they are called narrow AI.

Implementations of narrow AI are becoming more common as deep learning is increasingly incorporated into everyday life. For example, narrow AI is used for email spam filtering, music streaming services, and even autonomous vehicles. Nonetheless, there are questions about the extensive use of narrow AI in critical network functions. Some claim that the characteristics of narrow AI make it unreliable and that, in situations where a neural network could be used to regulate large systems (e.g. a power grid or financial trading), more conservative, risk-averse alternatives may be preferable.

General AI: Modern AI development started in the mid-1950s. The first wave of AI pioneers became persuaded that general artificial intelligence was feasible and would emerge within only a few decades. AI visionary Herbert A. Simon wrote in 1965 that machines would be capable, within twenty years, of doing any work a man can do.

Artificial General Intelligence (AGI) would be a machine capable of understanding the world as well as any human, with much the same ability to learn how to carry out a wide variety of tasks. AGI does not yet exist, but it has featured in science fiction for more than a century and has been popularized in modern times by films such as 2001: A Space Odyssey. Cinematic depictions of AGI differ greatly, but they lean mostly towards the dystopian vision of autonomous machines eradicating or enslaving mankind, as in The Matrix or The Terminator. In such stories, AGI is often cast as either indifferent to human suffering or bent on the destruction of mankind.

Using such intelligence to control robots that are at least as dexterous and agile as a person would result in a new generation of machines capable of performing any human activity. Over time, such intelligence could take over almost any human role. Initially, humans may remain cheaper than robots, or humans working alongside AI may be more effective than AI on its own. Yet the arrival of AGI could eventually render much human labor redundant.

But one thing is for sure: we should not let general AI break its constraints, and we should use it only for development, not for destruction.

Edge Computing

Edge computing is a networking concept that seeks to bring computation as close to the source of data as possible in order to reduce latency and bandwidth use. Simply put, edge computing means running fewer processes in the cloud and moving those processes to local places, such as a user's phone, an IoT device, or an edge server. Bringing computation to the edge of the network minimizes the amount of long-distance communication that has to happen between a client and a server.

Imagine a building secured with dozens of high-definition IoT video cameras. These are 'dumb' cameras that simply output a raw video signal and stream it continuously to a cloud server. On the cloud server, a motion-detection application processes the video output from all the cameras so that only clips containing activity are saved to the file archive. This places a persistent and substantial burden on the building's internet connection, as the large amount of video footage being transmitted consumes considerable bandwidth. In addition, there is a very heavy load on the cloud server, which has to process the video footage from all the cameras simultaneously.

Now consider pushing the motion-detection processing to the edge of the network. What if each camera used its own internal computer to run the motion-detection application and then sent footage to the cloud server only as needed? This would substantially reduce bandwidth usage, because most of the video footage would never have to travel to the cloud server. The cloud server would now only be responsible for storing the relevant video, so the system could support a greater number of cameras without becoming overloaded. That is what edge computing looks like.

The cost savings alone can be a catalyst for many businesses to adopt an edge-computing architecture. Companies that moved many of their applications to the cloud may have found that bandwidth costs were higher than expected.
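To make the camera example concrete, here is a minimal sketch (in TypeScript) of what the on-device logic might look like; the detectMotion stub and the archive URL are hypothetical placeholders, not any real camera's API.

```typescript
// Stand-in for a real motion detector; an actual camera would analyse pixel data here.
function detectMotion(frame: Uint8Array): boolean {
  return frame.length > 0 && frame[0] !== 0; // trivial placeholder rule
}

// Placeholder endpoint for the cloud archive; not a real service.
const CLOUD_ARCHIVE_URL = "https://example.com/archive";

// Edge-side loop: only frames that contain motion ever leave the device,
// so the raw video stream never has to cross the network.
async function processFrame(frame: Uint8Array): Promise<void> {
  if (!detectMotion(frame)) {
    return; // nothing interesting: drop the frame locally, saving bandwidth
  }
  await fetch(CLOUD_ARCHIVE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: frame,
  });
}
```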

However, the biggest advantage of edge computing is the ability to process and store data faster, enabling the real-time applications that are critical to many businesses. Before edge computing, a smartphone scanning a person's face for facial recognition had to run the facial-recognition algorithm through a cloud-based service, which took considerable time to process.

With an edge-computing model, given the processing power of modern smartphones, the algorithm can run locally on an edge server or gateway, or even on the smartphone itself. Applications such as virtual and augmented reality, smart cities, and even building-automation systems need this kind of fast processing. Worldwide, carriers are rolling out 5G networks that offer high bandwidth and low latency for devices, letting businesses go from a garden hose to a firehose of network capacity. Instead of merely providing faster speeds and telling companies to keep storing data in the cloud, many carriers are building edge-computing approaches into their 5G deployments to offer faster real-time processing, especially for mobile devices, connected cars, and self-driving cars.

It is clear that while the original aim of edge computing was to minimize the latency and bandwidth costs of IoT devices communicating over long distances, the growth of real-time applications that need local processing and storage will continue to drive the technology forward in the coming years.

So I am concluding this article here. Hope you guys enjoyed this!

Document Object Model (DOM)

When it comes to designing a site or a web page, the DOM plays a very important role; here we are mainly talking about the HTML DOM, through which JavaScript can interact with the HTML and find or change any element of the page. To put it simply: whenever a web page loads, the browser creates a Document Object Model of the page. It has a tree-like structure made up of nodes, where every node (except the root) has one parent and possibly many children.

The Document Object Model (DOM) is an application programming interface (API) for HTML and XML documents. It defines the logical structure of a document and the way the document is accessed and manipulated. The DOM can be used from any programming language.

The HTML DOM Tree of Objects

JavaScript can change all the HTML elements and attributes of a page as well as all of its CSS styles, and it can even add new ones. The HTML DOM is the standard for how to get, change, add, or delete HTML elements.
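To make this concrete, here is a minimal sketch (in TypeScript, which compiles to plain JavaScript) of getting, changing, and adding elements; the "greeting" id is only an assumption about the page's markup.

```typescript
// A minimal sketch: change an existing element, update a style, and add a new one.
// Assumes the page contains <p id="greeting">Hello</p>; that id is hypothetical.
const greeting = document.getElementById("greeting");
if (greeting) {
  greeting.textContent = "Hello, DOM!";   // change the element's text
  greeting.style.color = "teal";          // change one of its CSS styles
}

const note = document.createElement("p"); // create a new element
note.textContent = "This paragraph was added by JavaScript.";
document.body.appendChild(note);          // attach it to the DOM tree
```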

DOM and JavaScript

The DOM is not a programming language, but without it, JavaScript would have no concept of web pages, HTML documents, XML documents, or their component parts (e.g. elements). Every part of a document (the document as a whole, the head, the tables inside the document, the table headers, the text within the table cells) is part of the document object model for that document, so they can all be accessed and manipulated using the DOM and a scripting language like JavaScript.

In the beginning, JavaScript and the DOM were tightly intertwined, but eventually they evolved into separate entities. The page content is stored in the DOM and can be accessed and manipulated using JavaScript, so that we can write this approximate equation:

API = DOM + JavaScript

The DOM was designed to be independent of any particular programming language, making the structural representation of the document available from a single, consistent API. Although we concentrate solely on JavaScript in this article, DOM implementations can be built for any language.

Accessing the DOM

You don't have to do anything special to begin using the DOM. Different browsers have different DOM implementations, and these implementations exhibit varying degrees of conformance to the actual DOM standard (a subject we will avoid here), but every web browser uses some document object model to make web pages accessible via JavaScript.

When you create a script, whether inline in a <script> element or included in the web page through a script-loading instruction, you can immediately begin using the API for the document or window objects to manipulate the document itself or to get at the children of that document, which are the various elements of the web page. This is a broad topic that cannot be covered all at once.
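As a small illustrative sketch (in TypeScript), here is what using the document and window objects from a script might look like; the logged messages are purely illustrative.

```typescript
// Run once the page has finished loading.
window.addEventListener("load", () => {
  // The document object is the entry point to the page's DOM tree.
  console.log(`Page title: ${document.title}`);
  console.log(`Paragraphs on the page: ${document.querySelectorAll("p").length}`);

  // The window object represents the browser window itself.
  console.log(`Viewport width: ${window.innerWidth}px`);
});
```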

So here I am concluding this. Hope you guys enjoy reading this!

Internet of Things (IoT)

The Internet of Things, or IoT, is a system of interrelated computing devices, mechanical and digital machines, objects, animals, or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

A device in the internet of things can be a person with a heart monitor implant, a farm animal with a biochip transponder, a vehicle with built-in sensors that alert the driver when tire pressure is low, or any other natural or man-made object that can be assigned an Internet Protocol (IP) address and is capable of transferring data over a network.

An IoT ecosystem consists of web-enabled smart devices that use embedded systems, such as processors, sensors, and communication hardware, to collect, send, and act on the data they acquire from their environments. IoT devices share the sensor data they collect by connecting to an IoT gateway or other edge device, where the data is either sent to the cloud to be analyzed or analyzed locally. Often, these devices communicate with other related devices and act on the information they get from one another. The devices do most of the work without human intervention, although people can interact with them, for example, to set them up, give them instructions, or access the data.
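As a rough illustration of that device-to-gateway flow, here is a minimal TypeScript sketch of a device-side loop that samples a sensor and posts the reading to a gateway; the readTemperature function, the device id, and the gateway URL are hypothetical placeholders, not part of any real product.

```typescript
// Hypothetical sensor read; a real device would talk to actual hardware here.
function readTemperature(): number {
  return 20 + Math.random() * 5; // simulated reading in degrees Celsius
}

// Placeholder gateway endpoint, not a real service.
const GATEWAY_URL = "http://gateway.local/readings";

// Sample the sensor and forward the reading to the IoT gateway.
async function reportReading(): Promise<void> {
  const payload = {
    deviceId: "thermostat-01",            // unique identifier for this device
    temperatureC: readTemperature(),
    timestamp: new Date().toISOString(),
  };
  await fetch(GATEWAY_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Report once a minute without any human intervention.
setInterval(() => { reportReading().catch(console.error); }, 60_000);
```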

Now, a very important question: why do we need IoT?

The Internet of Things helps people live and work smarter and gain more control over their lives. In addition to offering smart devices for home automation, IoT is important for the enterprise. IoT gives businesses a real-time view of how their processes really work, delivering insight into everything from machine performance to supply chain and distribution activities.

IoT helps businesses simplify operations and reduce labor costs. It also cuts down on duplication of effort and improves service quality, making it easier to produce and distribute products while ensuring consistency in customer transactions.

Advantages of IoT

Some of the benefits of IoT are as follows:

  • Ability to access information from anywhere, on any device, at any time;
  • Improved communication between connected electronic devices;
  • Transfer of data packets over a connected network, saving time and money;
  • Automation of tasks, improving the quality of a business's services and reducing the need for human intervention.

Disadvantages of IoT

  • As the number of connected devices grows and more information is shared between devices, the potential for hackers to access sensitive information also grows.
  • Enterprises may eventually have to deal with massive numbers, perhaps even millions, of IoT devices, and collecting and managing the data from all of them can be challenging.
  • If there is a flaw in the system, it is possible that every connected device will be compromised.
  • Because there is no universal standard of compatibility for IoT, it is difficult for devices from different manufacturers to communicate with each other.

IoT benefits to organizations

The Internet of Things offers organizations a range of benefits. Some benefits are industry-specific, while others apply across multiple sectors. Common benefits of IoT enable businesses to:

  • Monitor their overall business processes;
  • Improve the customer experience (CX);
  • Save time and money;
  • Enhance employee productivity;
  • Integrate and adapt business models;
  • Make better business decisions;
  • Generate more revenue.

Here I’m concluding this. Hope you guys enjoy reading!

Robotic Process Automation (RPA)

According to Chris Huff, chief strategy officer at Kofax, “RPA is software that automates rules-based actions performed on a computer.” It is a technology in which the machine records a specific task performed by a human and then performs the same task whenever required, without any human intervention.

Every RPA system must include the three capabilities stated below:

  • Communication with other systems, through either screen scraping or API integration.
  • Decision-making.
  • An interface for bot programming.

One of the most amazing things about RPA is that it doesn't need any prior coding knowledge; it requires neither writing code nor direct access to the code or database of any application. So you don't have to worry if you don't know how to code; even if you don't enjoy coding much, you can still learn this.

Robotic Process Automation (RPA) is the use of software 'robots' to perform routine, rule-based activities such as filling in the same information in various locations, downloading data, or copying and pasting.

RPA works by accessing information from existing IT systems. There are several ways RPA software can integrate with applications. One option is to connect to databases and enterprise services on the back end. Another is through front-end or desktop connections, which take several forms.

Which is the best way? It depends on the organization and on the needs the solution must address. With back-end connectivity, automation can access applications and services under the control of a process-automation server. This is most commonly used for unattended automation, where software robots handle back-office tasks, such as processing insurance claims, at scale.

Types of RPA: 

  • Attended automation: requires human intervention while performing the assigned tasks.
  • Unattended automation: these bots do not require any human intervention while performing their tasks; they are designed with decision-making capabilities.
  • Hybrid RPA: combines the capabilities of both attended and unattended automation.

Now, is RPA the same as any other desktop application? The answer is no; the difference lies in the decision-making capability. Some general functions of RPA include the following (a small illustrative sketch follows this list):

  • Opening applications such as email clients, moving files, etc.
  • Integrating with existing tools.
  • Collecting data from different web portals.
  • Processing data, including calculations, data extraction, etc.
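As referenced above, here is a small illustrative sketch, in TypeScript, of the rule-based pattern these functions share. It is not any vendor's tool; the file names, the claim format, and the approval rule are made-up assumptions.

```typescript
// A tiny rule-based automation sketch: read records, apply a fixed rule,
// and write the results elsewhere, much like the copy-check-paste loop an RPA bot performs.
import { readFileSync, writeFileSync } from "fs";

interface Claim { id: string; amount: number; }

// "Collect data": load claims exported from some source system (CSV with id,amount rows).
const rows = readFileSync("claims.csv", "utf8").trim().split("\n").slice(1);
const claims: Claim[] = rows.map((line) => {
  const [id, amount] = line.split(",");
  return { id, amount: Number(amount) };
});

// "Decision making": apply a simple business rule to each record.
const decisions = claims.map((c) => ({
  id: c.id,
  decision: c.amount <= 1000 ? "auto-approve" : "route-to-human",
}));

// "Process data": write the decisions where the next system picks them up.
writeFileSync("decisions.json", JSON.stringify(decisions, null, 2));
```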

Tools for RPA:

  • Blue Prism
  • UiPath
  • Automation Anywhere
  • Pega
  • Contextor
  • NICE Systems

Ten years is a long time to forecast, and RPA is a fairly young and evolving market. Yet RPA has certainly proved its worth and will continue to expand rapidly. With these development tools, RPA deployment is becoming an area for professional bot developers, not just casual business users. A career in RPA is considered very promising: students entering the field can expect a substantial share of job opportunities, and pay packages for specialists with these skills are often comparatively higher than in other fields.

Industries that use RPA:

  • BPO.
  • Finance & Banking.
  • Insurance.
  • Healthcare.
  • Telecom.
  • Manufacturing.
  • Public Sector.
  • Retail & CPG.

Resources to learn RPA:

  • UiPath Academy: free for everyone; offers the complete UiPath RPA Developer course.
  • Udemy: Complete UiPath RPA Developer Course.
  • Edureka!
  • IntelliPaat.
  • EpsilonAI Academy.

Hope you guys enjoyed this. Happy learning!

Emotion Recognition

Have you ever thought of interacting with a machine through emotion recognition? This is an area of science that many want to explore but have not yet fully mastered. With the constant advancement of Automated Emotion Evaluation (AEE), emotion recognition technologies are trying to establish themselves in the market. We already have many advanced technologies that make everything easier, yet we remain keen as mustard for more, and this technology could prove a boon for all of us.

Emotion recognition is a technique that allows software to “read” the emotions on a human face through advanced image processing. Companies have been experimenting with combining sophisticated algorithms with image processing techniques developed over the last ten years to learn what an image or video of a person's face tells us about how he or she feels, and even to detect the mixed emotions a face may show.

AEE influences several rapidly developing fields, such as robotics, entertainment, education, and marketing:

  • In entertainment: to propose the most appropriate content for the target audience
  • In education: to improve learning processes, knowledge transfer, and perception methodologies
  • In marketing: to create targeted adverts based on the emotional state of the potential customer
  • In robotics: to design smart collaborative or service robots that can interact with humans

The scientific literature attempts to classify emotions and feelings and to set boundaries between emotion, mood, and affect. According to these classifications, some of the terms are defined as follows:

  • “emotion” is a response of the organism to a particular stimulus (a person, situation, or event). It is usually an intense, short-duration experience, and the person is typically well aware of it;
  • “affect” is the result of the effect produced by an emotion and includes their dynamic interaction;
  • “feeling” is always experienced in relation to a particular object of which the person is aware; its duration depends on how long the representation of the object remains active in the person's mind;
  • “mood” tends to be subtler, longer lasting, less intense, and more in the background, but it can shift a person's affective state in a positive or negative direction.

Research in this area also explores the idea of humanizing the Internet of Things and affective computing systems, validated by prototypes developed by the researchers themselves. Intelligent machines with human-like empathy are likely to make the planet a better place. The IoT sector is certainly moving ahead in recognizing human emotions, thanks to advances in emotion-recognition sensors and methods, computer vision, voice recognition, deep learning, and related technologies.

According to Stefan Winkler, CEO and co-founder of Opsis, his company's approach is unique in that it provides fine-grained measurements along two dimensions: valence (positive vs. negative emotions) and arousal (energetic vs. passive expressions). This allows the system to consider more emotions than the seven basic ones used in competing solutions (optimistic, sad, pleased, shocked, frightened, frustrated, and disgusted).

Winkler noted that the understanding of emotions will only keep improving, along with consumer acceptance. “There have been several studies, such as those by MarketsandMarkets, forecasting that the emotion detection and recognition market will grow from US$6.72 billion in 2016 to US$36.07 billion by 2021, at a compound annual growth rate (CAGR) of 39.9% over that period. Recent high-profile acquisitions demonstrate the tremendous scope of, and growing need for, approaches to emotion recognition. With all these high-profile takeovers, A.I. is clearly set to grow, and these technologies are very much sought after,” he said. “Our customers have been very receptive to this new avenue of recognizing and understanding the emotions of their customers.

“Our clients, such as SP/SI, have shown interest in integrating emotion data into their strategy and in visualizing how consumers respond to their marketing campaigns. OEM/SDK vendors are interested in incorporating it into smart-nation programs and their security approach. They expect that emotional awareness has great potential to be incorporated into IoT and Smart Nation deployments for monitoring, wearables, and end-sensing tools.”

Web Development

When browsing and surfing the internet, do you ever feel like building web pages of your own? If yes, then web development is probably for you. It is one of the basic skills that almost every technology enthusiast should learn, and it is among the most fascinating and easiest to pick up. So, what is web development?

Web development refers to building, creating, and maintaining websites. It includes aspects such as web design, web publishing, web programming, and database management.

While the terms “web developer” and “web designer” are often used synonymously, they do not mean the same thing. Technically, a web designer only designs website interfaces using HTML and CSS. A web developer may be involved in designing a website, but may also write web scripts in languages such as PHP and ASP. Additionally, a web developer may help maintain and update a database used by a dynamic website.

Web development includes many types of web content creation. Some examples include hand coding web pages in a text editor, building a website in a program like Dreamweaver, and updating a blog via a blogging website. In recent years, content management systems like WordPress, Drupal, and Joomla have also become a popular means of web development. These tools make it easy for anyone to create and edit their website using a web-based interface.

Web development has many terms associated with it, such as front-end, back-end, and full-stack developer. What are they, and in what context is each used?

Front-end Developer

A front-end developer is responsible for the look and design of a website. The design of the site aims to ensure that, when users open the site, they see information in a format that is easy to read and relevant. This is complicated by the fact that users now access sites on a vast range of devices with different screen sizes and resolutions, requiring the developer to take these factors into account when building the site. They need to ensure that the site renders correctly in different browsers (cross-browser), on different operating systems (cross-platform), and on different devices (cross-device), which requires careful planning on the developer's side.

The front end section is constructed using some of the languages discussed below:

HTML: HTML stands for HyperText Markup Language. It is used to build the front-end portion of a web page using a markup language. HTML is a combination of hypertext and markup: hypertext defines the links between web pages, while the markup language is used to define the text within tags, which defines the structure of the web page.

CSS: Cascading Style Sheets, affectionately referred to as CSS, is a simple language designed to simplify the process of making web pages presentable. CSS lets you apply styles to your web pages and, more importantly, do so independently of the HTML.

JavaScript: JavaScript is a well-known scripting language used to add the magic that makes sites interactive for the user. It is used to enhance the functionality of a website, from running cool games to powering web-based applications.

Front-end Frameworks and Libraries

AngularJS: AngularJS is an open-source front-end JavaScript framework that is predominantly used to build single-page applications (SPAs). It is a continuously growing and evolving framework that offers better ways to build web applications. It turns static HTML into dynamic HTML, can be freely used and modified by anyone, extends HTML attributes with directives, and binds data to HTML.

React.js: React is a declarative, efficient, and flexible JavaScript library for building user interfaces. ReactJS is an open-source, component-based front-end library responsible only for the view layer of the application. It is maintained by Facebook.
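To show what "component-based" means in practice, here is a minimal sketch of a React function component in TypeScript (TSX); the Greeting component, its name prop, and the "root" element id are illustrative assumptions, not part of any real project.

```typescript
// A minimal React function component (TSX).
import React from "react";
import { createRoot } from "react-dom/client";

function Greeting({ name }: { name: string }) {
  return <h1>Hello, {name}!</h1>; // declarative view: describe what to render
}

// Mount the component into an element with id="root" (assumed to exist in the page).
const container = document.getElementById("root");
if (container) {
  createRoot(container).render(<Greeting name="React" />);
}
```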

Bootstrap: Bootstrap is a free and open-source collection of tools for creating responsive websites and web applications. It is the most popular HTML, CSS, and JavaScript framework for the development of responsive, mobile-first websites.

jQuery: jQuery is an open-source JavaScript library that simplifies the interaction between an HTML/CSS document, or more precisely the Document Object Model (DOM), and JavaScript. In developer terms, jQuery simplifies HTML document traversal and manipulation, browser event handling, DOM animations, Ajax interactions, and cross-browser JavaScript development.

SASS: Sass is a mature, stable, and powerful CSS extension language. It is used to extend the functionality of a site's existing CSS with features such as variables, inheritance, and nesting, with ease.

Other front-end libraries and frameworks include Semantic-UI, Foundation, Materialize, Backbone.js, Express.js, Ember.js, etc.

Back-end Developer

The back end is the server side of a website. It stores and organizes data and ensures that everything on the client side works correctly. It is the part of the website that users cannot see and interact with directly; the parts and features built by back-end developers are accessed indirectly through a front-end application. Activities such as writing APIs, creating libraries, and working with system components that have no user interface, or even with scientific programming systems, are also part of back-end work.

Back-end Languages

The back end component is built using some of the languages discussed below:

PHP: PHP is a server-side scripting language built specifically for web creation. Since PHP code is running on the server-side, it is called the server-side scripting language.

C++: A general-purpose programming language that is now also widely used for competitive programming. It is sometimes used as a back-end language as well.

Java: Java is one of the most common and widely used programming languages and platforms. It’s very scalable. Java components are readily available.

Python: Python is a programming language that lets you work quickly and integrate systems more efficiently.

JavaScript: JavaScript can be used as both a front-end and a back-end programming language.

Node.js: Node.js is an open-source, cross-platform runtime environment for running JavaScript code outside the browser. Remember that Node.js is neither a framework nor a programming language, although people often confuse it for one. Node.js is commonly used to build back-end services such as APIs for web apps or mobile apps. It is used in production by major companies such as PayPal, Uber, Netflix, Walmart, and so on.
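As a rough sketch of what such a back-end service looks like, here is a tiny HTTP API using Node's built-in http module (written in TypeScript); the route and port are arbitrary choices for illustration.

```typescript
// A minimal back-end service in Node.js: a tiny HTTP API that returns JSON.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.url === "/api/hello" && req.method === "GET") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ message: "Hello from the back end" }));
  } else {
    res.writeHead(404, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ error: "Not found" }));
  }
});

server.listen(3000, () => {
  console.log("API listening on http://localhost:3000");
});
```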

Back-end Frameworks

Back-end frameworks include Express, Django, Rails, Laravel, Spring, etc.

Other back-end programming/scripting languages include C#, Ruby, REST, Go, etc.

Difference between Frontend and Backend:

Front-end and back-end development are quite different from each other, but they are still two aspects of the same situation. The front end is what users see and interact with; the back end is how everything works behind the scenes.

The front end is the part of the website that users can see and interact with, such as the graphical user interface (GUI) and the command line, including the design, navigation menus, text, images, videos, etc. The back end, on the other hand, is the part of the website that users cannot see or interact with directly.

The visual aspects of the website that users can see and experience are the front end; everything that happens in the background can be attributed to the back end.

The languages used for the front end are HTML, CSS, and JavaScript, while those used for the back end include Java, Ruby, Python, and .NET.

Full-stack Developer

A full-stack web developer is a person who can develop both client and server software. Besides mastering HTML and CSS, he/she also knows how to:

Program a browser (using JavaScript, jQuery, Angular, or Vue)

Program a server (using PHP, ASP, Python, or Node)

Program a database (using SQL, SQLite, or MongoDB)

Being a full-stack developer is valuable because you understand almost every aspect of web development and can switch between front-end and back-end work as required.

Resources to learn

  • W3Schools (Free)
  • Coursera (Paid)
  • Udemy (Paid)
  • freeCodeCamp (Free)
  • Treehouse (Paid)
  • Codecademy (Free)
  • Traversy Media (Free)
  • HTML Dog (Free)

So, all the best, guys, for this amazing learning journey; hope you find this piece informative.