Integrated Circuit (IC)

 



Previously, circuits were huge and cumbersome: discrete components such as resistors, capacitors, inductors, transistors, and diodes were connected by copper wires. This limited the use of circuits to large machinery; small, compact appliances were impossible to build. Moreover, such circuits were neither particularly rugged nor dependable.

                         

Necessity, as they say, is the mother of invention, and smaller, more powerful, and safer circuits were needed for integration into everyday gadgets. The transistor, invented by three American scientists at Bell Labs, greatly simplified matters, but it was the integrated circuit that truly transformed the face of electronics.

History of IC invention

The first monolithic planar integrated circuit (IC) chip was demonstrated in 1960. The idea of integrating electronic circuits into a single device was born when German physicist and engineer Werner Jacobi developed and patented the first known integrated transistor amplifier in 1949, and British radio engineer Geoffrey Dummer proposed in 1952 to integrate a variety of standard electronic components in a monolithic semiconductor crystal. A year later, Harwick Johnson filed a patent for a prototype IC. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) presented similar chip designs in which several transistors shared a common active area but were not electrically isolated from one another.

These concepts could not be put into practice by industry until a breakthrough came in late 1958. Three people from three different American companies solved three key problems that were preventing the fabrication of integrated circuits. Jack Kilby of Texas Instruments patented the principle of integration, built the first prototype ICs, and commercialized them. Kilby's invention was a hybrid integrated circuit (hybrid IC), as opposed to a monolithic integrated circuit (monolithic IC). Between late 1958 and early 1959, Kurt Lehovec of Sprague Electric Company invented p–n junction isolation to electrically isolate components on a semiconductor crystal. Robert Noyce of Fairchild Semiconductor designed the first monolithic IC chip: he proposed a better form of insulation based on Jean Hoerni's planar process technology and devised a means of connecting the IC components (aluminum metallization). On September 27, 1960, a group led by Jay Last at Fairchild Semiconductor built the first operational semiconductor IC based on Noyce's and Hoerni's ideas.

Texas Instruments, which held the patent on Kilby's invention, initiated a patent dispute that ended in 1966 with a cross-licensing agreement. Who invented the IC remains a point of contention. The American press of the 1960s named Kilby, Lehovec, Noyce, and Hoerni; in the 1970s the list was condensed to Kilby and Noyce. Kilby received the Nobel Prize in Physics in 2000 "for his contribution to the creation of the integrated circuit." In the 2000s, historians Leslie Berlin, Bo Lojek, and Arjun Saxena revived the notion of multiple IC inventors and reassessed Kilby's contribution. Modern IC chips are based on Noyce's monolithic IC rather than Kilby's hybrid IC.



An integrated circuit, commonly known as an IC, is a small semiconductor chip that contains a complete circuit. Compared with an ordinary circuit built from discrete components, it is extremely compact. Monolithic integrated circuits are the most common type of IC.

An Integrated Circuit (IC) is a microchip containing hundreds or thousands of electronic components such as resistors, capacitors, and transistors. Oscillators, amplifiers, microprocessors, timers, and computer memory are all examples of ICs.

An integrated circuit is created using particular logic techniques and circuit designs. The three types of IC design are as follows:

Analog Design

Digital Design

Mixed Design


Digital Design: The digital design approach is used to create ICs that serve as computer memory (such as RAM and ROM). This style of design maximizes circuit density and overall efficiency, and it is used to develop ICs that process binary data (0s and 1s).

Analog Design: The analog design process is used to build integrated chips that serve as oscillators, filters, and regulators. This design technique is used when power dissipation, gain, and resistance must be carefully optimized.

Mixed Design: Mixed design combines analog and digital design ideas. Mixed ICs include digital-to-analog and analog-to-digital converters (D/A and A/D converters) as well as clock/timing ICs.

In an integrated circuit, a complicated layering of semiconductor, copper, and other materials forms the resistors, transistors, and other components. The processed wafer is cut and shaped into individual pieces called dies. A semiconductor die is delicate, and the connections between its layers are extremely fine, so the die itself is too small and fragile to solder and connect to directly. This is why ICs are packaged: the packaging encases the fragile little die in the familiar black chip and turns it into a device that can be readily connected.

There are several varieties of packaging, each with its own size and mounting options. All integrated circuits are polarized, and each pin has its own position and purpose. The first pin is indicated by a notch or a dot; from there, pin numbering proceeds counterclockwise around the chip.



Integrated circuits are made from semiconducting materials such as silicon. Because the integrated chip is so small and delicate, it is attached to a set of tiny gold or aluminum wires before being molded into a flat block of plastic or ceramic. Metal pins on the outside of the block connect to the wires within, and the solid block keeps the chip cool and protects it from overheating.

IC size

The size of an integrated chip ranges from about 1 square mm to more than 200 square mm.

IC integration

Integrated chips get their name from the fact that they integrate several devices on a single chip. A microcontroller, for example, is an IC that combines a microprocessor, memory, and interface circuitry in a single device.

Logic Gate Integrated Circuits

Logic gate integrated circuits (ICs) contain combinational circuits that produce a logical output from their input signals. A single gate typically has two or more inputs and exactly one output, as the sketch below illustrates.
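As a small illustration, the following Python model describes one gate of a 2-input NAND IC (the classic 7400 package contains four such gates) and prints its truth table; the model is a sketch for illustration, not taken from any datasheet.

```python
# Truth table of a 2-input NAND gate, the kind found four-to-a-package
# in classic logic ICs such as the 7400.
from itertools import product

def nand(a: int, b: int) -> int:
    """Return the NAND of two logic levels (0 or 1)."""
    return 0 if (a and b) else 1

for a, b in product((0, 1), repeat=2):
    print(f"A={a} B={b} -> Y={nand(a, b)}")
```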

Timer ICs

A timer IC is designed to produce precise timing cycles, with a duty cycle that is commonly 50% or set by external components.
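As a concrete example, the snippet below applies the standard astable timing equations for a 555-style timer (f = 1.44 / ((R1 + 2R2)·C), duty = (R1 + R2) / (R1 + 2R2)); the resistor and capacitor values are arbitrary assumptions for illustration.

```python
# Astable (free-running) timing of a 555-style timer IC.
R1 = 10e3    # ohms  (illustrative values, not from a real design)
R2 = 47e3    # ohms
C  = 100e-9  # farads

freq = 1.44 / ((R1 + 2 * R2) * C)    # oscillation frequency
duty = (R1 + R2) / (R1 + 2 * R2)     # fraction of the period the output is high
print(f"frequency = {freq:.1f} Hz, duty cycle = {duty:.1%}")
```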

Operational Amplifiers

An OpAmp, or Operational Amplifier, is a high-gain voltage amplifier with a differential input and, usually, a single-ended output.
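The sketch below works through the gain of a hypothetical non-inverting op-amp stage, first with an ideal amplifier and then with a finite open-loop gain; the component values are illustrative assumptions, not from any specific part.

```python
# Closed-loop gain of a non-inverting op-amp stage.
Rf = 99e3   # feedback resistor (assumed)
Rg = 1e3    # resistor from inverting input to ground (assumed)
A  = 1e5    # open-loop gain, a typical order of magnitude

ideal  = 1 + Rf / Rg                # ideal closed-loop gain -> 100
beta   = Rg / (Rf + Rg)             # feedback fraction
actual = A / (1 + A * beta)         # gain with finite open-loop gain
print(f"ideal gain = {ideal:.1f}, with finite A = {actual:.2f}")
```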

Voltage Regulators

A voltage regulator IC maintains a steady DC output voltage despite variations in its DC input.
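The behavioral sketch below captures the idea for a hypothetical linear regulator: the output tracks the setpoint only while the input stays above the setpoint plus a dropout margin. Both figures are assumptions for illustration.

```python
# Behavioral model of a linear voltage regulator (illustrative only).
def regulator_out(v_in: float, v_set: float = 5.0, dropout: float = 0.3) -> float:
    """Regulated output: v_set when headroom allows, else it sags with v_in."""
    return v_set if v_in >= v_set + dropout else max(v_in - dropout, 0.0)

for v_in in (4.0, 5.2, 6.0, 9.0, 12.0):
    print(f"Vin={v_in:4.1f} V -> Vout={regulator_out(v_in):4.2f} V")
```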

Integrated circuits transformed the electronics industry, paving the way for computers, CD players, TVs, and a host of other household goods. The widespread use of chips also helped carry modern electronic gadgets to all corners of the globe.


Li-Fi Technology

 

Li-Fi is a wireless communication technology that uses light to transmit data between devices. Professor Harald Haas coined the term during his 2011 TED Global talk in Edinburgh, where he introduced the idea of "wireless data from every light." Haas is Professor of Mobile Communications at the University of Edinburgh and co-founded pureLiFi with Dr. Mostafa Afgani.

In terms of end use, the technology is comparable to Wi-Fi; the key technical difference is that Wi-Fi transmits data by using radio frequencies to induce a voltage in an antenna, whereas Li-Fi transmits data by modulating the intensity of light. In theory, Li-Fi can carry data at rates of up to 100 Gbit/s. One benefit is that Li-Fi can operate safely in environments otherwise sensitive to electromagnetic interference (e.g., aircraft cabins, hospitals, and military facilities). Several groups around the world are working on the technology.



 

Li-Fi is a light communication technology capable of carrying data at high rates over the visible, ultraviolet, and infrared spectrums. At present, only LED lamps can be used to transmit data in visible light. Like Wi-Fi, Li-Fi is a branch of optical wireless communications (OWC) technology, which uses light from light-emitting diodes (LEDs) as a medium for networked, mobile, high-speed communication. From 2013 to 2018 the Li-Fi market was expected to grow at an annual rate of 82 percent, to a value of more than $6 billion. The industry has not developed as expected, however, and Li-Fi remains a niche sector used mostly for technology evaluation.

Visible light communication (VLC) works by switching the current to the LEDs on and off at very high speed, too fast for the human eye to notice, so there is no visible flicker. Although Li-Fi LEDs must be on to transmit data, they can be dimmed below human perception while still emitting enough light to carry data. This reliance on the visible spectrum is also a key bottleneck of the technology, because it is tied to lighting and not well suited to mobile communication. Handover mechanisms that allow roaming between different Li-Fi cells may enable smooth transitions. Because light waves cannot penetrate walls, Li-Fi has a far shorter range but also less hacking potential than Wi-Fi. A direct line of sight is not required to carry a signal: light reflected off walls can still achieve 70 Mbit/s.

Li-Fi also has the advantage of creating no electromagnetic interference in sensitive environments such as aircraft cabins, hospitals, and nuclear power plants. Both Wi-Fi and Li-Fi transmit data over the electromagnetic spectrum, but whereas Wi-Fi uses radio waves, Li-Fi uses visible, ultraviolet, and infrared light. While the US Federal Communications Commission has warned of a looming spectrum crisis because Wi-Fi is close to full capacity, Li-Fi has almost no capacity constraints: the visible light spectrum is 10,000 times larger than the entire radio frequency spectrum. Researchers have achieved data rates of about 224 Gbit/s, far faster than typical high-speed internet connections. The cost of Li-Fi is predicted to be ten times lower than that of Wi-Fi. Its disadvantages include a limited range, poor reliability, and high installation costs.
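As a rough illustration of the on-off keying such links can use, the sketch below turns a byte string into LED on/off drive levels; the symbol rate and framing are assumptions for illustration, not part of any Li-Fi standard.

```python
# Toy on-off keying (OOK): each bit holds the LED fully on or off
# for a fixed number of samples (one symbol period).
def bytes_to_led_samples(payload: bytes, samples_per_bit: int = 4):
    samples = []
    for byte in payload:
        for i in range(7, -1, -1):              # MSB first
            bit = (byte >> i) & 1
            samples.extend([bit] * samples_per_bit)
    return samples

led_drive = bytes_to_led_samples(b"Hi")
print(led_drive[:32])  # the bits of 'H' as LED on/off levels
```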



Bg-Fi is a Li-Fi system consisting of a mobile application and a simple consumer device, such as an IoT (Internet of Things) product, that includes a color sensor, a microcontroller, and embedded software. The color sensor on the device receives light from the mobile phone's display and converts it into digital information, and light-emitting diodes let the device communicate back to the mobile phone in real time.
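The sketch below illustrates the general idea with a hypothetical screen-to-sensor scheme: the display flashes colors from a small palette, and the color sensor classifies each reading as a 2-bit symbol. The palette and classifier here are assumptions for illustration; the actual Bg-Fi encoding is not described in this text.

```python
# Hypothetical color-keyed downlink: nominal screen colors map to symbols.
PALETTE = {           # nominal (R, G, B) -> 2-bit symbol (assumed scheme)
    (255, 0, 0): 0b00,
    (0, 255, 0): 0b01,
    (0, 0, 255): 0b10,
    (255, 255, 255): 0b11,
}

def classify(reading):
    """Match a noisy sensor reading to the nearest palette color."""
    def dist(color):
        return sum((a - b) ** 2 for a, b in zip(color, reading))
    return PALETTE[min(PALETTE, key=dist)]

readings = [(250, 10, 5), (12, 240, 20), (240, 250, 245)]
print([classify(r) for r in readings])  # -> [0, 1, 3]
```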

Because the light waves Li-Fi uses cannot pass through walls or doors, its signals stay inside the room. This increases network security and makes it easier to control network access: as long as transparent openings such as windows are covered, access to a Li-Fi channel is limited to devices inside the room.

Li-Fi promises several benefits:

An extra layer of very small cells ('attocells') added to wireless infrastructure.

Relief from the radio frequency spectrum shortage (roughly 10,000 times more capacity).

Very high peak data rates (10 Gbit/s).

Enabling the Internet of Things (100 times more devices).

Significantly improved security of wireless communication (reduced interception of signals).

Enhanced energy efficiency by combining data communication and illumination (up to 100 times energy reduction).

No radio-frequency health concerns.

The Metaverse: Mankind's Imagination

 The name "metaverse" is a combination of the words "meta" and "universe." It's mostly used to allude to an upcoming future generation of the internet that's been dubbed Web 3.0. The growth of online 3-D or virtually integrated settings that give users with virtual reality and augmented reality experiences is predicted as the internet evolves.

The metaverse refers to both existing and future integrated digital systems centered on virtual and augmented reality. It is widely regarded as the internet's next frontier, and the IT industry and other sectors see it as a major economic and financial opportunity.

In the vision for the metaverse articulated by social media and technology companies, devices such as virtual reality headsets, digital glasses, and smartphones will give users access to 3-D virtual or augmented reality environments where they can work, connect with friends, conduct business, visit remote locations, and pursue educational opportunities, all mediated by technology in new and immersive ways.



The metaverse does not denote any single type of experience. Rather, it refers to a range of immersive digital experiences that will be available to users in the future, allowing them to take part in a variety of activities in entirely digital environments. That might mean playing a massive virtual reality multiplayer game through a VR headset, or experiencing integrated digital and physical settings, such as location-specific, immersive digital content delivered to business visitors via digital glasses or smartphones.

The metaverse, then, is a collection of digital spaces and experiences currently being developed by companies to provide more realistic and immersive digital encounters. The technology has a wide range of possible applications, from augmented reality collaboration platforms that improve cooperation, to productivity systems for remote teams, to applications that might, for example, let real estate agents host virtual house tours. Some aspects of the metaverse are already present in internet-enabled video games such as Second Life, Minecraft, and Fortnite, which offer rich social and virtual experiences within a persistent virtual world where users from all over the world can participate at the same time. The metaverse is not the same thing as virtual reality, but it will make far greater use of it.

Many social media and technology corporations, including Meta Platforms (formerly Facebook) and Microsoft, are investing heavily in social VR to create platforms where people can interact socially or work remotely, for example via Microsoft Teams.

Virtual Reality

A virtual world is a simulated environment that many users can explore concurrently and independently through avatars. The virtual world supplies perceptual data to the user, along with real-time actions and messages from other users and their movements, and it simulates physical effects such as gravity.



In massively multiplayer online games, virtual worlds let users do things like build and modify the environment and travel between different areas within the world. According to those behind the metaverse, virtual worlds can serve purposes beyond gaming, such as collaboration software and medical care. Virtual worlds are also known as synthetic worlds. Within a virtual world, a virtual reality headset presents realistic sights, sounds, and other sensations to the user. Virtual reality is currently employed mainly in video games, but in the future it could also be used for virtual meetings, medical training, and military training. Virtual reality equipment lets users look around a virtual environment, move through it, and interact with objects and other people.

Mixed reality

Mixed reality is the merging of real and virtual worlds to provide new ways of engaging with physical and digital environments and with other users. In mixed reality you are neither entirely in the virtual world nor entirely in the real world; instead, you are somewhere along the 'virtuality continuum' between the two.

Examples of mixed reality include place-specific simulations, such as 3-D representations of charts or concepts projected into virtual reality headsets or glasses during a university lecture, and augmented reality games such as Pokémon Go, where users see the Pokémon they find in the real world through their mobile device's camera. Possible uses for mixed reality include video games, education, military training, healthcare, and the integration of people and robotics.

Augmented Reality

Like mixed reality, augmented reality offers an interactive way to engage with real-world surroundings. Augmented reality enhances the physical environment with digital sensory additions such as sights, sounds, haptic data, and olfactory data. The merging of real and virtual worlds, real-time interaction, and 3-D rendering of both virtual and real objects are all elements of augmented reality. One example application is letting shoppers visualize a product they are considering in a setting resembling their own home.

Virtual Markets

The phrase "virtual economy" was initially applied to the trading or sale of virtual products in online games, especially large multiplayer online games. Players may buy products from each other and trade real money for game money in several of these games. Cryptocurrencies and non-fungible tokens can now be included in virtual economies. Many people predict that in the future, social media firms and other businesses will be able to develop their own virtual currencies, but authorities may limit their ability to do so.



According to Meta Platforms, immersive virtual reality experiences are how people will connect on social media in the future. Meta's concept, however, is speculative: it would rely on technology and server capacity that do not yet exist, and it anticipates widespread adoption of hardware such as virtual reality headsets and smart glasses. Because many firms will be developing the architectures, hardware, and software that power the metaverse version of Web 3.0, the metaverse is predicted to reshape the IT industry. And because it proposes to change things as simple as how consumers shop for groceries, navigate a city, tour an apartment, and engage with businesses and advertising, it will also affect firms well beyond the technology sector.

While it is unclear whether the metaverse's visions will come true, or whether the health, privacy, and regulatory concerns it has already raised will limit the scope of its implementation, it has the potential to disrupt a variety of industries and sectors by requiring them to spend more on technology.

Renewable energy strategies for sustainable development

 


Renewable energy is energy obtained from renewable sources, of which there are many: sunlight, wind, rain, tides, waves, biomass, and geothermal heat, among others. These resources are continually replenished and never run out.

Sustainable energy development strategies typically involve three primary technical advances: energy savings on the demand side, efficiency gains in energy production, and the substitution of fossil fuels with alternative renewable energy sources. Consequently, large-scale renewable energy implementation plans must include strategies for integrating renewable sources into coherent energy systems shaped by energy savings and efficiency measures.

First and foremost, increasing the share of renewable energy in the supply is a huge task. Renewable energy is regarded as a valuable resource in many countries around the world, yet it accounts for less than 15% of worldwide primary energy supply, with hydropower and wood fuels making up the majority of renewable energy in developing countries. Sources such as wind and solar still account for only a very small portion of the total supply, although the potential is great, and in certain regions and countries the share of renewables is much higher. Meanwhile, the demand for energy has risen dramatically in recent decades. Renewable energy policies for long-term growth face two primary hurdles: integrating a large share of intermittent resources into the energy system, particularly the power supply, and incorporating the transport sector into the strategy. The case described below explains these issues and proposes possible solutions.





Since the first oil crisis in 1973, energy savings and efficiency improvements have been a key element of Denmark's energy policy. As a result, despite a 70% rise in GDP, the country has kept its primary fuel consumption constant for more than 30 years through energy conservation and the expansion of combined heat and power (CHP) and district heating. Furthermore, renewable energy has replaced 14 percent of fossil fuels. Over the same period, transport, electricity consumption, and the heated floor area have all grown significantly. It is therefore possible to apply sustainable development plans that combine savings, efficiency improvements, and renewable energy sources. In 1996, as part of the data underlying the energy strategy, the Energy Agency analyzed the potential of renewable energy sources; some potential appears to have been overlooked. Offshore wind potential, which depends heavily on technological development, is now greater than estimated and will keep rising as wind turbines grow in size.

Denmark's energy supply was traditionally based on fossil fuels. The country has relatively limited hydropower potential, and during the 1960s and 1970s large steam turbines near the major cities dominated the electricity supply. After the first oil crisis, however, Denmark moved to the forefront in CHP, energy conservation, and renewable energy. As a result, the energy system has been transformed from the situation in 1972, when oil accounted for 92 percent of a total of 833 PJ, to today, when oil accounts for just 41 percent of 828 PJ. Over the same period, transport, electricity consumption, and the heated floor area have all grown significantly, and the share of power produced by CHP has risen in recent years. Combining energy production from CHP and wind power raises a further issue: until recently, CHP plants were not operated to balance variations in wind output, which caused problems with excess electricity production during periods of strong wind.
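A quick check of the figures just quoted shows how large the shift away from oil has been in absolute terms:

```python
# Worked numbers from the paragraph above: oil's share of primary energy
# fell from 92% of 833 PJ (1972) to 41% of 828 PJ (today).
oil_1972 = 0.92 * 833   # ~766 PJ
oil_now  = 0.41 * 828   # ~339 PJ
reduction = 1 - oil_now / oil_1972
print(f"oil use: {oil_1972:.0f} PJ -> {oil_now:.0f} PJ ({reduction:.0%} drop), "
      f"while total supply stayed nearly constant")
```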





In 2001 the Energy Agency established an expert panel to examine the problem of surplus electricity generation caused by the high share of wind and CHP in the energy system. As part of the project, Aalborg University conducted a series of long-term energy system studies evaluating investments in more flexible energy systems for the year 2020. These assessments were carried out with the EnergyPLAN energy system analysis computer model.

The goal of the analysis is to determine whether a 100 percent renewable energy system is feasible and to identify the key technological improvements and implementation strategies required. All alternatives were computed with the EnergyPLAN energy system analysis model: each system's energy balance was calculated for every hour of the year, taking into account the intermittency of renewable energy sources (RES), the capacity limits of flexible technologies, and ancillary service requirements. The EnergyPLAN model has been used in a number of other large-scale renewable energy integration assessments. The starting point for the study is the basic premise that sustainable development entails three main technological changes: energy savings on the demand side, improvements in energy efficiency, and the substitution of renewable energy sources for fossil fuels. Accordingly, the three technical developments listed below were chosen for investigation.

Savings: A 10% reduction in the demand for electricity, district heating, and domestic and industrial heat.

Efficiency: A combination of more CHP and improved efficiencies. The improved CHP plants have 50 percent electric output and 40 percent heat output, achievable partly through fuel-cell technology and partly through improvements to current steam-turbine/engine technologies. "More CHP" means converting 50% of the fuel used for individual dwellings and industry to CHP, aided in part by district heating. Note that such technical improvements are modest compared with the full potential: it is both conceivable and reasonable to save more than 10% while also converting more than 50% of this energy to CHP (the figures are checked in the short calculation after this list).

Flexible technologies: As savings, efficiency, and renewable energy grow in importance, the integration problem, and with it the transport problem, becomes critical. In a scenario outlined by Risø National Laboratory, oil for transport is replaced by electricity: vehicles weighing less than 2 tons become a mix of battery and hydrogen fuel-cell vehicles. In this scenario, 20.8 TWh of oil is replaced by 7.3 TWh of electricity. Applying the same ratio, the reference scenario's total oil consumption of 50.7 TWh converts into 17.8 TWh of electricity demand. This electricity demand was made flexible within the week, with a maximum capacity of 3500 MW.
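A short calculation verifies the figures in the efficiency and flexible-technology items above:

```python
# Checking the scenario arithmetic quoted in the list above.
# Transport: 20.8 TWh of oil is replaced by 7.3 TWh of electricity,
# and the same ratio scales the reference 50.7 TWh of oil.
ratio = 7.3 / 20.8
print(f"electricity-to-oil ratio = {ratio:.3f}")              # ~0.351
print(f"50.7 TWh oil -> {50.7 * ratio:.1f} TWh electricity")  # ~17.8

# Efficiency: improved CHP plants deliver 50% electricity + 40% heat,
# i.e. 90% total fuel utilization.
print(f"total CHP fuel utilization = {0.50 + 0.40:.0%}")
```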

 

To recap, sustainable energy development strategies typically involve three primary technical advancements: energy savings on the demand side, energy efficiency gains on the production side, and the replacement of fossil fuels with diverse renewable energy sources. Large-scale renewable energy implementation plans must therefore include strategies for integrating renewable sources into coherent energy systems shaped by energy savings and efficiency measures.

When a significant share of intermittent resources is combined with CHP and savings, however, devising sustainable energy strategies becomes a matter of introducing and expanding flexible energy technologies and building integrated energy system solutions. Such technological changes are essential for promoting further sustainable growth. All of the adjustments were computed with the EnergyPLAN energy system analysis model: the energy balance of each system was calculated for every hour of the year, accounting for the intermittency of RES, the capacity limits of flexible technologies, and ancillary service requirements. The following system flexibility improvements were identified as critical to converting the energy system into a 100 percent renewable system. First, alternatives to oil must be found for transport. Given the scarcity of biomass resources in Denmark, electricity-based alternatives have emerged as critical technologies. Such technologies also improve the possibility of incorporating wind power into ancillary services, such as maintaining voltage and frequency in the electricity supply.
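To make the hour-by-hour balancing concrete, here is a toy dispatch loop in the spirit of such models; it is an illustrative sketch with made-up numbers, not the EnergyPLAN model itself.

```python
# Toy hourly balance: wind serves demand first, CHP fills the residual
# load up to its capacity, and leftover wind shows up as surplus.
demand  = [3200, 3000, 3400, 3800, 3600]  # MW, assumed sample hours
wind    = [2900, 3500, 1200, 2500, 4100]  # MW, assumed wind output
chp_cap = 2000                            # MW, assumed CHP capacity

for hour, (d, w) in enumerate(zip(demand, wind)):
    chp = min(max(d - w, 0), chp_cap)     # CHP balances the residual load
    surplus = max(w - d, 0)               # excess wind in windy hours
    unserved = max(d - w - chp, 0)        # shortfall if CHP maxes out
    print(f"h{hour}: CHP={chp:4.0f} MW  surplus={surplus:4.0f} MW  "
          f"unserved={unserved:4.0f} MW")
```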

The next key point is the inclusion of small CHP plants in regulation, together with the addition of heat pumps to the system. These technologies are particularly important because they allow the ratio of electricity to heat demand to change while retaining CHP's high fuel efficiency. The third crucial element is to incorporate electrolysers into the system while also adding wind turbines to the voltage and frequency regulation of the electricity supply. The estimates show that by combining 180 TJ/yr of biomass with 5000 MW of photovoltaics and between 15 and 27 GW of wind power, the Danish energy system can be converted into a 100 percent renewable energy system. The reference case requires 27 GW of wind power, but with cost reductions and efficiency gains the required capacity falls to roughly 15 GW. By adding 500 MW/yr, 15 GW of wind power can be reached; Danish manufacturers currently produce over 3000 MW of wind turbine capacity each year.
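As a final piece of arithmetic from the paragraph above, the quoted build rate implies the low-end wind target takes about three decades:

```python
# 15 GW of wind at 500 MW (0.5 GW) of new capacity per year.
target_gw, rate_gw_per_year = 15, 0.5
print(f"{target_gw} GW / {rate_gw_per_year} GW per year = "
      f"{target_gw / rate_gw_per_year:.0f} years")
```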