Privacy on the Internet
The Internet is a rich source of information, and this includes
scraps of information about individuals, including you and me. There
are many ways to gather information about an individual on the Internet.
Fortunately, there are just as many measures you can take to reduce
the amount of personal information that is available to others.
Using free
dial-up services that provide you with a dynamic IP address and
require a limited amount of your personal information makes it more
difficult for others on the Internet to determine your identity.
Reduce the amount of information that your Web browser gives out
(e.g. disable cookies) and the amount of access that your Web browser
or e-mail client gives to others (e.g. disable JavaScript, don’t
interpret HTML in e-mail). Keep your Web browser and e-mail client
updated (vulnerabilities in Web browsers and e-mail clients can
allow a malicious individual to damage or pry into your computer).
Additionally,
be wary of running programs or opening documents obtained from the Internet.
Executables can carry Trojan horse programs that give an intruder
complete remote control of your computer, and Word documents can carry
macro viruses that delete information on your computer.
As a precaution,
install antivirus software, update its virus definitions at least
once a week, and scan your computer for viruses immediately after
each update. Also consider installing a personal firewall, which
protects a computer by restricting access to it.
Finally, use
encryption whenever possible. For instance, you may have the option
to encrypt e-mail messages before sending them and even encrypt
data on your disks. Take steps wherever possible to make sure that
connections to your e-mail server are encrypted to protect your
e-mail password. The same applies to connections to commercial websites,
to protect any personal information you provide. These simple
measures may take a little effort to implement, but they can save
you from headaches and sleepless nights.
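To make the last point concrete, here is a minimal Python sketch of checking mail over an encrypted connection. The server name and credentials are placeholders rather than anything from this article; the point is simply that IMAP over SSL/TLS keeps the login password and message contents inside an encrypted tunnel instead of sending them as readable text.

```python
import imaplib
import ssl

# Placeholder server and credentials, used purely for illustration.
HOST = "mail.example.com"
USER = "you@example.com"
PASSWORD = "your-password"

# IMAP over SSL/TLS (port 993): the password and messages travel
# inside an encrypted tunnel instead of as readable plain text.
context = ssl.create_default_context()
with imaplib.IMAP4_SSL(HOST, 993, ssl_context=context) as mailbox:
    mailbox.login(USER, PASSWORD)
    mailbox.select("INBOX")
    status, data = mailbox.search(None, "ALL")
    print("Messages in INBOX:", len(data[0].split()))
```

Most e-mail clients expose the same protection as a "use SSL/TLS" option in their account settings, so you rarely need to write code like this yourself.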
Get to know Steven Jobs
Steven Paul Jobs was adopted as an infant by Paul and Clara Jobs of
Mountain View, California, in February 1955. As a schoolboy, Jobs
attended lectures at the Hewlett-Packard electronics firm and later
worked there as a summer employee. Another
employee at Hewlett-Packard was Stephen Wozniak. An engineering
whiz with a passion for inventing electronic gadgets, Wozniak at
that time was perfecting his “blue box,” an illegal
pocket-size telephone attachment that would allow the user to make
free long-distance calls. Going to work for Atari after leaving
Reed College, Jobs renewed his friendship with Steve Wozniak.
Steve Jobs'
innovative idea of a personal computer led him to revolutionize
the computer hardware and software industry. When Jobs was twenty-one,
he and his friend Steve Wozniak built a personal computer
called the Apple. The Apple changed people’s idea of a computer
from a gigantic and inscrutable mass of vacuum tubes used only by
big business and the government to a small box used by ordinary
people. Jobs' software development for the Macintosh popularized
the windowing interface and mouse technology.
Two years after
building the Apple I, Jobs introduced the Apple II. The Apple II
was the best buy in personal computers for home and small business
throughout the following five years. When the Macintosh was introduced
in 1984, it was marketed towards medium and large businesses. The
Macintosh took the first major step in adapting the personal computer
to the needs of the corporate work force. Workers lacking computer
knowledge accomplished daily office activities through the Macintosh’s
user-friendly windows interface.
Improve your computer literacy
Real time
Meaning: occurring immediately. The term is used to describe
a number of different computer features. For example, real-time
operating systems are systems that respond to input immediately.
They are used for such tasks as navigation, in which the computer
must react to a steady flow of new information without interruption.
Most general-purpose operating systems are not real-time because
they can take a few seconds, or even minutes, to react.
Real time can
also refer to events simulated by a computer at the same speed that
they would occur in real life. In graphics animation, for example,
a real-time program would display objects moving across the screen
at the same speed that they would actually move. (Source: Webopedia.com)
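As a small illustration of the animation example (my own sketch, not part of the Webopedia definition), the loop below ties an object's position to the wall clock, so it covers the same distance per second of real time no matter how quickly or slowly the individual iterations run.

```python
import time

SPEED = 50.0      # units per second of real time
DURATION = 2.0    # run the simulation for two seconds

start = time.monotonic()
position = 0.0
last = start

# Real-time update loop: position advances with elapsed wall-clock
# time, so the object moves at the same speed it would "in real life"
# regardless of how long each iteration takes.
while time.monotonic() - start < DURATION:
    now = time.monotonic()
    position += SPEED * (now - last)   # distance = speed * elapsed time
    last = now
    time.sleep(0.05)                   # pretend to render a frame

print(f"Final position after {DURATION}s: {position:.1f} units")
```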
How search engines work
Search
engines are the key to finding specific information on the vast
expanse of the World Wide Web. Without the use of sophisticated
search engines, it would be virtually impossible to locate anything
on the Web without knowing a specific URL, especially as the Internet
grows exponentially every day. But do you know how search engines
work? And do you know what makes some search engines more effective
than others?
There are basically
three types of search engines: those that are powered by crawlers,
or spiders; those that are powered by human submissions; and those
that are a combination of the two.
Crawler-based
engines send crawlers, or spiders, out into cyberspace. These crawlers
visit a website, read the information on the actual site, read the
site's meta tags and also follow the links that the site connects
to. The crawler returns all that information to a central repository
where the data is indexed. The crawler will periodically return
to the sites to check for any information that has changed, and
the frequency with which this happens is determined by the administrators
of the search engine.
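As a rough, much-simplified illustration of what such a spider does, the Python sketch below fetches a single page, pulls out its meta tags and outgoing links, and packages them as the record it would send back to the central repository. The URL is a placeholder; a real crawler would also respect robots.txt, limit its request rate, and queue the discovered links for later visits.

```python
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class PageParser(HTMLParser):
    """Collects meta tags and outgoing links from one HTML page."""
    def __init__(self):
        super().__init__()
        self.meta = {}
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

def crawl(url):
    """Fetch one page and return what a spider would send home for indexing."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = PageParser()
    parser.feed(html)
    # Resolve relative links so they can be queued for later visits.
    links = [urljoin(url, link) for link in parser.links]
    return {"url": url, "meta": parser.meta, "links": links}

if __name__ == "__main__":
    record = crawl("https://example.com/")   # placeholder site
    print(record["meta"], record["links"][:5])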
Human-powered
search engines rely on humans to submit information that is subsequently
indexed and catalogued. Only information that is submitted is put
into the index. In both cases, when you query a search engine to
locate information, you are actually searching through the index
that the search engine has created; you are not actually searching
the Web. These indices are giant databases of information that is
collected and stored and subsequently searched. This explains why
sometimes a search on a commercial search engine, such as Yahoo!
or Google, will return results that are in fact dead links. Since
the search results are based on the index, if the index hasn’t
been updated since a Web page became invalid, the search engine still
treats the page as an active link even though it no longer is. It
will remain that way until the index is updated.
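The fact that a query runs against the index rather than the live Web can be shown with a toy inverted index (a deliberately tiny sketch, not how any commercial engine actually stores its data): each word maps to the set of pages containing it, and a query is answered by intersecting those sets.

```python
# A toy inverted index: word -> set of pages containing that word.
pages = {
    "page1.html": "apple computers and personal computing history",
    "page2.html": "how search engines index the web",
    "page3.html": "personal privacy on the internet",
}

index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(query):
    """Answer a query from the index alone; the live pages are never fetched."""
    results = None
    for word in query.lower().split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

print(search("personal"))          # ['page1.html', 'page3.html']
print(search("personal privacy"))  # ['page3.html']
```

If page3.html were removed from the Web tomorrow, this index would still return it for "privacy" until the page was re-crawled and the index rebuilt, which is exactly how dead links end up in search results.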
So why will
the same search on different search engines produce different results?
Part of the answer is that not all indices are exactly the same;
it depends on what the spiders find or what
the humans submitted. But more important, not every search engine
uses the same algorithm to search through the indices. The algorithm
is what the search engines use to determine the relevance of the
information in the index to what the user is searching for.
One of the
elements that a search engine algorithm scans for is the frequency
and location of keywords on a Web page. Pages in which keywords appear
more frequently are typically considered more relevant. But search engine
technology is becoming increasingly sophisticated in its attempts to
discourage what is known as keyword stuffing, or spamdexing.
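A toy version of frequency-and-location scoring might look like the sketch below (an illustrative heuristic, not the algorithm of any real engine): occurrences of the keyword in the body each count once, while occurrences in the title count extra because of where they appear.

```python
def keyword_score(keyword, title, body, title_weight=3):
    """Toy relevance score: body occurrences count once,
    title occurrences count extra (location matters)."""
    keyword = keyword.lower()
    body_hits = body.lower().split().count(keyword)
    title_hits = title.lower().split().count(keyword)
    return body_hits + title_weight * title_hits

doc = {
    "title": "Apple computers",
    "body": "The Apple changed how people thought about computers.",
}
print(keyword_score("apple", doc["title"], doc["body"]))  # 1 + 3*1 = 4
```

It also shows why keyword stuffing was tempting: repeating a keyword hundreds of times would inflate this naive score, which is exactly the behaviour modern engines try to detect and discount.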
Another common
element that algorithms analyse is the way that pages link to other
pages on the Web. By analysing how pages link to each other, an
engine can both determine what a page is about (if the keywords
of the linked pages are similar to the keywords on the original
page) and whether that page is considered “important”
and deserving of a boost in ranking. Just as the technology is becoming
increasingly sophisticated to ignore keyword stuffing, it is also
becoming savvier to webmasters who build artificial links into
their sites in order to inflate their rankings.
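The best-known form of this link analysis is the PageRank family of algorithms. The sketch below is a bare-bones, illustrative version of the idea rather than any engine's actual formula: each page's score is repeatedly recomputed from the scores of the pages that link to it, so a link from an "important" page is worth more than a link from an obscure one.

```python
# Toy link graph: page -> pages it links to.
links = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
}

def rank(links, damping=0.85, iterations=50):
    """Very small PageRank-style iteration: a page is 'important'
    if important pages link to it."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for page in pages:
            incoming = sum(score[src] / len(out)
                           for src, out in links.items() if page in out)
            new[page] = (1 - damping) / len(pages) + damping * incoming
        score = new
    return score

for page, value in sorted(rank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {value:.3f}")
```

Artificial link farms try to game exactly this kind of calculation, which is why real engines layer many safeguards on top of it.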
Sent in by Arshad Ali