How does the processor work?
The microprocessor is an extremely complex device, and the details
of its inner workings are very difficult to explain or understand,
but its functions can be broken down into simple tasks. The first
designs of the modern microprocessor were presented by John von
Neumann, a Hungarian-born scientist who in 1945 proposed the “stored
program concept”, a novel idea at the time. In principle, the designs
of modern microprocessors have changed very little from those von
Neumann proposed nearly 60 years ago.
All the tasks of the microprocessor can be summarized as performing
the instructions given to it by the user and giving an output in
return. The microprocessor is analogous to a “little man” (the
Control Unit) locked inside a room, with only two letterboxes
(labelled “In” and “Out”) through which he can communicate with the
outside world. Inside the room, the “little man” has a ‘calculator’
(the Arithmetic/Logic Unit) and a ‘file cabinet’ with individual
slots, each capable of holding a number of limited size (the Memory).
The “little man” can accept instructions through the “In” box and
send out results through the “Out” box, and this is essentially what
happens in a computer. He uses his memory as temporary storage for
the instructions he receives and for the results awaiting output,
while using the ‘calculator’ to perform all the necessary
calculations. The “little man” can only carry out very simple
instructions such as addition and subtraction, and any task of
higher complexity has to be broken down to this level of simplicity.
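To see what this ‘breaking down’ looks like, here is a toy sketch
(written in the C language for readability; a real processor works
in binary machine code) that multiplies two numbers using nothing
but the repeated addition our “little man” can manage:

#include <stdio.h>

/* Toy illustration, not real processor code: multiplication
   reduced to the simple addition the "little man" knows. */
int multiply_by_adding(int a, int b)
{
    int result = 0;
    for (int i = 0; i < b; i++)   /* assumes b >= 0 */
        result = result + a;      /* one simple addition per step */
    return result;
}

int main(void)
{
    printf("6 x 7 = %d\n", multiply_by_adding(6, 7));
    return 0;
}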
Strictly speaking, the simple language that the microprocessor
understands is machine code; the human-readable form in which
programmers write it is called “Assembly Language”. But even though
this language may be simple for the computer to understand,
programming in it takes a lot of effort. So, to overcome this
problem, scientists and mathematicians have developed “high level”
computer languages such as C, C++, Java, etc., which are easier to
use than Assembly Language.
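For contrast, here is a greeting as a complete program in C; the
high-level language hides all the low-level housekeeping that an
assembly programmer (as we will see in a moment) must spell out
by hand:

#include <stdio.h>

/* In a high-level language the whole job is one statement;
   the compiler translates it into many machine instructions. */
int main(void)
{
    printf("Hello!\n");
    return 0;
}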
You are welcome to contribute your ideas, views and comments to technopage_lk@yahoo.com
Multiple instructions at one time
Did you know that a simple program written in Assembly Language to
print “Hello!” on the computer screen runs to approximately 17 lines
of code? The microprocessor executes each instruction in a number of
stages that repeat like a cycle: fetching instructions from memory,
decoding them, keeping track of them, performing calculations,
managing memory and transferring data. It is a common perception
that the microprocessor can perform only one task at a time.
This is not entirely wrong; early processors did execute only one
instruction at a time, but their modern-day counterparts can work on
multiple instructions at once. For example, while the computer
fetches one instruction (let’s say instruction a1), it can decode
another (a2) and perform the calculations of yet another (a3), and
so on.
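The following toy simulation in C (purely illustrative, not how real
hardware is built) prints which instruction sits in which stage
during each clock cycle of a three-stage pipeline:

#include <stdio.h>

/* Toy 3-stage pipeline: each cycle, one instruction is fetched,
   an earlier one decoded, and an even earlier one executed --
   all at the same time. Names (a1, a2, ...) follow the text. */
int main(void)
{
    const int n = 5;                  /* instructions a1 .. a5 */
    for (int cycle = 1; cycle <= n + 2; cycle++) {
        int fetch   = cycle;          /* stage 1 */
        int decode  = cycle - 1;      /* stage 2 */
        int execute = cycle - 2;      /* stage 3 */

        printf("cycle %d:", cycle);
        if (fetch   >= 1 && fetch   <= n) printf("  fetch a%d",   fetch);
        if (decode  >= 1 && decode  <= n) printf("  decode a%d",  decode);
        if (execute >= 1 && execute <= n) printf("  execute a%d", execute);
        printf("\n");
    }
    return 0;
}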
This technology (which is not as simple as it sounds) is called
“Pipelining”, and that is literally what it does: it ‘pipelines’, or
lines up, many instructions at different stages so that the computer
can work on several of them at the same time. Apart from this,
modern microprocessor manufacturers employ various other techniques
to beef up their products and make them more powerful and efficient.
Some common techniques are increasing the memory inside the CPU so
as to minimize the time wasted on memory transfers (this will be
explained in our discussion about data busses and memory),
streamlining the memory management process, and using predictive
logic (not ‘predicative’) to effectively ‘guess’ what the next
instruction will be.
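As a rough illustration of such guessing, here is a toy sketch in C
of a two-bit predictor, a much simplified version of the branch
predictors found in real processors; the history of outcomes used
here is made up for the example:

#include <stdio.h>

/* Toy "predictive logic": a two-bit counter guesses whether a
   branch will be taken, based on what it did recently.
   States 0-1 predict "not taken", states 2-3 predict "taken". */
int main(void)
{
    int state = 2;                             /* start weakly "taken" */
    int history[10] = {1,1,1,0,1,1,1,0,1,1};   /* actual outcomes (made up) */
    int correct = 0;

    for (int i = 0; i < 10; i++) {
        int prediction = (state >= 2);         /* guess before knowing */
        int actual = history[i];
        if (prediction == actual) correct++;
        /* learn: nudge the counter toward the actual outcome */
        if (actual && state < 3) state++;
        if (!actual && state > 0) state--;
    }
    printf("guessed right %d times out of 10\n", correct);
    return 0;
}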
It was mentioned earlier that the processor is capable of performing
only very simple tasks. As the need for faster and more powerful
computers grows, some scientists believe this is best achieved by
enabling the computer to process ever more complex instructions: the
CISC (Complex Instruction Set Computer) approach.
Yet others believe that the way forward is to enable computers to
process simpler instructions at a faster rate: the RISC (Reduced
Instruction Set Computer) approach. Both arguments have their valid
points, but it is my personal view that the best option is a
balanced approach that employs the positive aspects of both.
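The difference can be shown in miniature: below, the same small job
is done ‘CISC-style’ as one powerful step and ‘RISC-style’ as a
sequence of simpler steps (a deliberately simplified C sketch; real
instruction sets are far richer than this):

#include <stdio.h>

/* "CISC-style": one complex operation that multiplies and adds
   in a single step, standing in for one powerful instruction. */
int muladd(int a, int b, int c) { return a * b + c; }

int main(void)
{
    int a = 3, b = 4, c = 5;

    int cisc = muladd(a, b, c);   /* one complex "instruction"     */

    int t    = a * b;             /* "RISC-style": simple step 1   */
    int risc = t + c;             /* "RISC-style": simple step 2   */

    printf("CISC-style: %d, RISC-style: %d\n", cisc, risc);
    return 0;
}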
I will not engage in an argument over which brand of microprocessor
is superior, but it seems that Intel is leading the market at the
moment, with AMD a close second. Then again, who can underestimate
the power of the Motorola chips that drive the iMac G4s? With that
we end our discussion of the microprocessor; next week, I will bring
the motherboard and the system bus to the stage.
Tell me how I feel?
What if a computer could begin to understand what you’re feeling?
The MIT Media Lab’s Affective Computing Group is developing a system
that will do just that. Physiological sensors attached to your body
and tiny cameras that record your facial expressions let the
computer monitor your reactions.
Then an “affective
tutor” will adjust a program to react to your emotions. For
instance, if you’re confused by a complex part of a video
lecture, the tutor could play it back for you or offer an explanation.
Perhaps the
machine itself could express emotion. MIT has developed Bruzard,
an interactive animated 3-D character designed to look like a small
child. It uses facial expressions to react to your questions. In
the future, Bruzard could be hooked up to something like a ‘chatterbot’
to create a more human interface.
Microsoft Research
has combined many of these ideas into a concept called Flow. Researchers
believe computers should be about giving you back your time. Thus
Flow, which is still in the research stage, will allow you to sit
at your computer and take part in a virtual meeting. Life-like avatars
would represent you and your co-workers so it would look much like
a traditional meeting, even though everyone might be in different
locations.
And the entire
conversation would be recorded and converted to searchable text
for later use. One of the big challenges is modelling human attention,
so that you could pay attention when you wanted to and not when
you didn’t have to. These techniques are a long way from the
mainstream. But the combination of animation, natural-language processing,
voice recognition, and voice synthesis may very well result in user
interfaces that seem more natural than anything we have today.