What is Computer Architecture?
Mark Hill defined Computer Architecture as “the science and art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals”.
INTRODUCTION
Historically, computers have progressed toward lower-cost devices, enabling new and innovative uses that expand the market for these devices and in turn drive further cost reductions. This cycle was evident with early mainframe computers, which were used at first to rapidly evaluate formulas and then evolved to perform complex design analysis and automate business processes. The continued progress in silicon processing techniques has brought several advantages. As the industry moves from one process geometry to a smaller one, the resulting benefits can be realized in several ways. Typically, for the same device, one can expect smaller die sizes, higher frequencies, lower power, and higher yields. In many cases, the reduction in die size allows silicon developers to add extra features to a device. This results in greater functionality at the cost of higher power and lower yields, even with the smaller process geometry.
Computer architectures have reached a watershed as the amount of network data created by user applications exceeds the data-processing capacity of any individual end-system. Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models alone. Adequate performance evaluation methods are therefore essential to guide the research and development process in the right direction. Performance evaluation is non-trivial in many respects, such as choosing workloads, selecting an appropriate modeling or simulation approach, running the model, and interpreting the results using meaningful metrics.
Poorly chosen performance evaluation may drive research and development in the wrong direction. There is a need for designs that reduce cost, save power, and increase performance in a multi-scale approach with potential applications ranging from nanoscale devices to data-center-scale computers. Technology has advanced rapidly over recent decades, mostly because of the high demand for personal and institutional computers. While this trend is expected to continue for a few years, Personal Computers (PCs) will gradually lose their position as the main driving factor behind technology innovation.
TRENDS IN TECHNOLOGY
If an instruction set architecture is to be successful, it must be designed to survive rapid changes in technology. After all, a successful new instruction set architecture may last for decades; for instance, the core of the IBM mainframe has been in use for more than 40 years. An architect must plan for technology changes that can extend the lifetime of a successful computer. To prepare for the evolution of a computer, the designer must be aware of rapid changes in implementation technology. Four implementation technologies, which change at a dramatic pace, are critical to modern implementations:
- Integrated Circuit Logic Technology
- Semiconductor DRAM
- Magnetic Disk Technology
- Network Technology
These rapid improvements in technology shape the design of a computer that, with speed and technology enhancements, may have a life span of five or more years. Even within the span of a single product cycle for a computing system (two years of design and two to three years of production), key technologies such as DRAM change enough that the designer must plan for these changes. In general, cost has decreased at about the rate at which density increases. Although technology improves continuously, the impact of these improvements can come in discrete leaps, as a threshold that allows a new capability is reached.
Performance Trends: Bandwidth over Latency
Bandwidth or throughput is the total amount of work done in a given time, such as megabytes per second for a disk transfer. In contrast, latency or response time is the time between the start and the completion of an event, such as milliseconds for a disk access. A simple rule of thumb is that bandwidth grows by at least the square of the improvement in latency.
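As a rough illustration of this rule of thumb, the minimal Python sketch below projects the bandwidth gain expected from a given latency improvement; the improvement factors are hypothetical, not measured values.

```python
# Minimal sketch of the bandwidth-versus-latency rule of thumb:
# bandwidth tends to grow by at least the square of the improvement
# in latency. The factors below are illustrative, not measured.

def expected_bandwidth_gain(latency_improvement: float) -> float:
    """Lower bound on bandwidth improvement for a given latency improvement."""
    return latency_improvement ** 2

if __name__ == "__main__":
    # Example: if disk access latency improves by a factor of 3,
    # the rule of thumb predicts at least a 9x bandwidth improvement.
    for latency_gain in (2.0, 3.0, 5.0):
        print(f"latency improved x{latency_gain:.0f} -> bandwidth at least "
              f"x{expected_bandwidth_gain(latency_gain):.0f}")
```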
Scaling of Transistor Performance and Wires
The increase in transistor performance, however, is more complex. As feature sizes shrink, devices shrink quadratically in the horizontal dimension and also shrink in the vertical dimension. The shrink in the vertical dimension requires a reduction in operating voltage to maintain correct operation and reliability of the transistors. This combination of scaling factors leads to a complex interrelationship between transistor performance and process feature size. To a first approximation, transistor performance improves linearly with decreasing feature size. The fact that transistor count improves quadratically with a linear improvement in transistor performance is both the challenge and the opportunity for which computer architects were created. In the early days of microprocessors, the higher rate of improvement in density was used to move quickly from 4-bit, to 8-bit, to 16-bit, to 32-bit microprocessors. More recently, density improvements have supported the introduction of 64-bit microprocessors as well as many of the innovations in pipelining and caches. In general, however, wire delay scales poorly compared with transistor performance, creating additional challenges for the architect. In the past few years, wire delay has become a major design limitation for large integrated circuits and is often more critical than transistor switching delay. Larger and larger fractions of the clock cycle have been consumed by the propagation delay of signals on wires. In 2001, the Pentium 4 broke new ground by allocating 2 stages of its 20+-stage pipeline just for propagating signals across the chip.
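The minimal sketch below captures this first-order relationship: under the simplifying assumption that density grows with the square of the linear shrink factor while per-transistor performance grows roughly linearly, a process step yields far more transistors than raw speed. The process geometries used are illustrative examples only.

```python
# First-order scaling sketch: shrinking the feature size by a linear
# factor s packs roughly s^2 more transistors into the same area,
# while per-transistor performance improves roughly linearly with s.
# The process geometries below are illustrative examples only.

def density_gain(shrink_factor: float) -> float:
    return shrink_factor ** 2   # area per device falls as 1/s^2

def performance_gain(shrink_factor: float) -> float:
    return shrink_factor        # linear, to a first approximation

if __name__ == "__main__":
    old_feature_um, new_feature_um = 0.35, 0.25   # assumed process step
    s = old_feature_um / new_feature_um
    print(f"shrink factor {s:.2f}: about {density_gain(s):.1f}x transistors, "
          f"about {performance_gain(s):.1f}x transistor performance")
```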
Trends in Power in Integrated Circuits
Power also presents challenges as devices are scaled. First, power must be brought in and distributed around the chip, and modern microprocessors use hundreds of pins and multiple interconnect layers just for power and ground. Second, power is dissipated as heat and must be removed. For CMOS chips, the traditional dominant energy consumption has been in switching transistors, also called dynamic power. The power required per transistor is proportional to the product of the load capacitance of the transistor, the square of the voltage, and the frequency of switching, with watts as the unit:

Power_dynamic = 1/2 x Capacitive load x Voltage^2 x Frequency switched

Mobile devices care about battery life more than power, so energy is the proper metric, measured in joules:

Energy_dynamic = Capacitive load x Voltage^2

Hence, dynamic power and energy are greatly reduced by lowering the voltage, and so voltages have dropped from 5V to just over 1V in 20 years. The capacitive load is a function of the number of transistors connected to an output and of the technology, which determines the capacitance of the wires and the transistors. For a fixed task, slowing the clock rate reduces power, but not energy.
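The sketch below applies these two formulas with hypothetical capacitance, voltage, and frequency values to show how strongly voltage scaling affects dynamic power and energy.

```python
# Dynamic power and energy, following the formulas in the text:
#   P_dynamic = 1/2 * C_load * V^2 * f_switched   (watts)
#   E_dynamic = C_load * V^2                      (joules)
# The capacitance, frequency, and voltages below are assumed values.

def dynamic_power(c_load: float, voltage: float, freq: float) -> float:
    return 0.5 * c_load * voltage ** 2 * freq

def dynamic_energy(c_load: float, voltage: float) -> float:
    return c_load * voltage ** 2

if __name__ == "__main__":
    c_load = 1e-9   # 1 nF of aggregate switched capacitance (assumed)
    freq = 2e9      # 2 GHz switching frequency (assumed)
    for v in (5.0, 3.3, 1.2):
        p = dynamic_power(c_load, v, freq)
        e = dynamic_energy(c_load, v)
        print(f"V = {v:3.1f} V: dynamic power = {p:6.2f} W, "
              f"energy per switch = {e:.2e} J")
```

Note how halving the voltage alone cuts both dynamic power and energy to a quarter, which is why voltage scaling has been the main lever for reducing dynamic power.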
Trends in Cost
Although there are computer designs where cost tends to be less important, specifically supercomputers, cost-sensitive designs are of growing significance. Indeed, over the past 20 years, the use of technology improvements to lower cost, as well as to increase performance, has been a major theme in the technology industry.
MAJOR TRENDS AFFECTING MICROPROCESSOR PERFORMANCE AND DESIGN
In a competitive processor market, some of the major trends affecting microprocessor performance are:
- Increasing number of cores: A multi-core processor is a single computing component with two or more independent central processing units (CPUs), called “cores”. A multi-core processor gives users sustained performance, improved power consumption, and parallel processing that allows multiple tasks to be performed at the same time. The development of microprocessors for desktops and workstations is currently progressing through the Core i3, Core i5, and Core i7 lines, resulting in several cores being used in the CPU. It was estimated that by 2017 embedded processors would sport 4,096 cores, servers would have 512 cores, and desktop chips would use 128 cores.
- Clock speed: Clock speed is the frequency at which a processor executes instructions or processes data, measured in megahertz (MHz) or gigahertz (GHz). The clock is a quartz crystal that vibrates and sends pulses to every component synchronized with it. In modern technology, most CPUs run in the gigahertz range; the higher the frequency of the chip, the faster the computer.
- Number of transistors: The number of transistors available on a microprocessor has a huge effect on CPU performance. According to Moore’s Law, the number of transistors on a chip roughly doubles about every two years. Thus, feature sizes shrink and transistor counts increase at a regular pace, providing improvements in integrated-circuit functionality and performance while reducing costs. Increasing the number of transistors enables a technique known as pipelining, in which the execution of instructions overlaps. Most modern processors have multiple instruction decoders, each with its own pipeline, allowing multiple instruction streams in which one instruction completes each clock cycle, at the cost of a large number of transistors on the chip (a minimal pipelining sketch follows this list).
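As a follow-up to the pipelining point above, the sketch below uses an idealized model (a stall-free pipeline, with assumed depth and instruction count) to show why overlapping instruction execution lets roughly one instruction complete per clock cycle once the pipeline is full.

```python
# Idealized pipeline timing model: with no stalls, a pipeline of depth D
# finishes N instructions in D + (N - 1) cycles, versus N * D cycles when
# each instruction runs to completion before the next one starts.
# The pipeline depth and instruction count are illustrative assumptions.

def unpipelined_cycles(n_instructions: int, depth: int) -> int:
    return n_instructions * depth

def pipelined_cycles(n_instructions: int, depth: int) -> int:
    return depth + (n_instructions - 1)

if __name__ == "__main__":
    depth = 5        # classic 5-stage pipeline (assumed)
    n = 1000         # number of instructions (assumed)
    base = unpipelined_cycles(n, depth)
    piped = pipelined_cycles(n, depth)
    print(f"unpipelined: {base} cycles, pipelined: {piped} cycles, "
          f"speedup about {base / piped:.2f}x")
```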
CONCLUSION
In processor design, there are several ways to improve CPU performance: one is the superscalar technique, which attempts to increase Instruction Level Parallelism (ILP); the other is the multithreading approach, which exploits Thread Level Parallelism (TLP). Superscalar means executing multiple instructions simultaneously, while chip-level multithreading (CMT) executes instructions from multiple threads within one processor chip at the same time. Meanwhile, as parallel computers become larger and faster, it becomes feasible to solve problems that previously took too long to run. Parallel processing is used in a wide range of fields, from bioinformatics to economics.
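As a software-level illustration of thread-level parallelism (not of the hardware CMT mechanism itself), the sketch below distributes independent tasks across worker processes with Python’s standard concurrent.futures module so that several run at once on a multi-core chip; the prime-counting workload is a made-up example.

```python
# Software-side illustration of thread-level parallelism: independent
# tasks are spread across worker processes so several execute at the
# same time on a multi-core processor. The workload is a toy example.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit: int) -> int:
    """Toy CPU-bound task: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [20_000, 30_000, 40_000, 50_000]
    # Each task is handed to a separate worker process, so on a
    # multi-core CPU the four counts are computed in parallel.
    with ProcessPoolExecutor() as pool:
        for limit, primes in zip(limits, pool.map(count_primes, limits)):
            print(f"primes below {limit}: {primes}")
```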
REFERENCES
- John L. Hennessy and David A. Patterson (2007). “Computer Architecture: A Quantitative Approach,” Fourth Edition. San Francisco.
- E. Kesavulu Reddy, “The Performance Trends in Computer System Architecture,” Tirupati, India, August 12, 2016.
- Kevin Kettler, “Technology Trends in Computer Architecture and Their Impact on Power Subsystems,” Texas, USA.
- Alireza Kaviani, “New Trends in Computer Technology,” Xilinx Inc., San Jose, California, USA.