Thursday, September 10, 2009

Handheld PC/ PalmTop

A Handheld PC, or H/PC for short, is a term for a computer built around a form factor smaller than any standard laptop computer. It is sometimes referred to as a palmtop. The first handheld device compatible with the desktop IBM personal computers of the time was the Atari Portfolio of 1989. Other early models were the Poqet PC of 1989 and the Hewlett-Packard HP 95LX of 1991, and other MS-DOS compatible handheld computers existed as well.

Some Handheld PCs run Microsoft's Windows CE operating system, and the term also covers Windows CE devices released into the broader commercial market.

Tuesday, September 8, 2009

What Makes a Site Attractive?


1. It's perfect for beginners because it takes the techno-babble away and makes the site-building experience smooth, enjoyable and frustration-free. And if, at any time, you decide to become more technical, the site builder can easily keep up with your newfound skills.

2. It puts the fun back into creating something you can be proud of. In a few months you'll have a popular website, a thriving community, or a profitable web business.

3. The SBI community welcomes you among thousands of other beginners eager to learn how to make a website. Their support, advice and motivation will prove invaluable in building your success pixel by pixel.

4. It gives you the tools, motivation, support and above all, the essential intelligence needed to make a website, one that shines for years to come.

5. Good money-back guarantee (they refund you if you change your mind at any time, for whatever reason -- yes, I know it sounds too good to be true and yes, I did ask them myself).

The Quick and Easy Way to Create a Website

Taking shortcuts means using website builder software for help. These fall into two major categories:

1. Website builders that only help with the technical side (creating and publishing pages, hosting, domain names, etc.). These can be online or offline, free or paid.

2. Website builders that also help with the non-technical, human side, the fun side (figuring out what to make the site about, attracting visitors, interacting with people, making money out of it, becoming popular thanks to it -- in essence, making it work).

Now, as a professional designer myself, I wouldn't touch a site builder with a barge pole -- mainly out of principle. But I do have good reasons not to use them. I can't complain about the free website builders (precisely because they're free), but some of the commercial site makers I've seen make me cringe.

They are complicated to use, redundant, and most don't give a website a fighting chance; they are ultimately useless. Learning to make a website with these paid site builders is, frankly, frustrating and not worth the trouble (or the money). Most of the time, free blogs or free website builders do the job just as well, or even better!

If you need to create a simple website for your family and friends, then free options will work just fine. However, if you plan to sell your products online, or to attract people to a web site about something you feel passionate about and make it wildly popular, then it's important to look at a website builder that can really help you achieve that (instead of a website that only takes up space).

A few months ago, I came across a website builder that, for once, pleasantly surprised me. Being a webmaster myself, I am reluctant to say this, but software like this could, potentially, turn the web designers of today into a dying breed. But I was skeptical at first. Very skeptical. I loathe companies that are quick to make a buck but give you little in return. So I started investigating.

Hard Disk

The hard disk is where all data is stored - the operating system, ancillary programs, and the HTML, images, movies, and so on for every webpage.

The hard disk is an often overlooked bottleneck in server architecture. Many people correctly focus on CPU and memory constraints but then consider only the hard disk's size - a misunderstanding of how a hard disk operates.

Just like RAM, a hard disk varies not only in size but also in the speed at which data is accessed. Unlike RAM, hard disk space is cheap - adding another hard disk or getting a bigger one is not a big expense. What really matters is how fast the hard disk responds.

A hard disk has three main stats: its storage space, its seek time (how long it takes to find data), and its RPM (how 'fast' the hard disk spins). In a server situation, the seek time and RPM become increasingly important.

Before further elaborating, we have to quickly mention IDE/SATA vs. SCSI. Most desktop computers feature IDE or SATA hard disks; the IDE/SATA nomenclature refers to how the hard disk interacts with the rest of the computer. A higher-performance alternative is SCSI (pronounced 'scuzzy'), and for servers it is recommended that SCSI drives be used.

Now, going back to our earlier discussion of seek time and RPM: most desktop computers have hard disks with seek times of roughly 8 ms, while SCSI drives clock in at around 3 ms - this means data is found in less than half the time!

Regarding RPM (revolutions per minute), an IDE/SATA drive usually spins at 7,200 RPM, with some high-end versions reaching 10,000 RPM. SCSI hard disks come in at 10,000 and 15,000 RPM. Combined with the shorter seek time, this means that SCSI drives not only find data faster, but also deliver the data faster.

For the sake of completeness, a fourth factor to consider is throughput - the speed at which data is transferred from the hard disk to the rest of the system. IDE/SATA solutions peak at around 100 MB/s, while SCSI drives can reach speeds of up to 320 MB/s.
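
To see how seek time and rotational speed combine, here is a quick back-of-the-envelope sketch in Python. The average rotational latency is half a revolution, so it follows directly from the RPM; the seek times and RPM figures are the illustrative ones quoted above, not measurements of any particular drive.

```python
# Back-of-the-envelope comparison of average disk access time.
# The figures are the illustrative ones quoted above, not vendor specs.

def avg_access_time_ms(seek_ms: float, rpm: int) -> float:
    """Seek time plus average rotational latency (half a revolution)."""
    ms_per_revolution = 60_000 / rpm            # one full revolution, in ms
    rotational_latency = ms_per_revolution / 2  # on average, half a turn
    return seek_ms + rotational_latency

sata = avg_access_time_ms(seek_ms=8.0, rpm=7200)    # typical desktop drive
scsi = avg_access_time_ms(seek_ms=3.0, rpm=15000)   # high-end server drive

print(f"SATA 7200 RPM:  {sata:.1f} ms per access")  # ~12.2 ms
print(f"SCSI 15000 RPM: {scsi:.1f} ms per access")  # ~5.0 ms
```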

SCSI solutions also have other advantages. These include a higher mean time between failures (MTBF - an estimate of how long the drive will work properly), more advanced data-integrity controls, fewer server resources utilized, and larger cache sizes. Lastly, SCSI drives can be chained together much more easily than an IDE/SATA solution.

Central processing unit

A Central Processing Unit (CPU) or processor is an electronic circuit that can execute computer programs, which are actually sets of instructions. This term has been in use in the computer industry at least since the early 1960s (Weik 2007). The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.

Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.

How do I know Which Host is Going to Work for Me?

Every business has different requirements for its website, and that's why we discuss topics like MySQL, PHP, ASP, .NET, ecommerce, and dedicated, shared, and VPS solutions. You may be looking for 5,000 MB of disk space or 35,000 MB of disk space; either way, we've got you covered. You can also compare bandwidth: if you are seeking a web hosting service that offers 250 GB or 3,500 GB of bandwidth, we have a web hosting provider for you. For as little as $5.95 per month, you can have a full-service web hosting provider with features like unlimited POP email and 35,000 MB of disk space.
If you are looking for a trustworthy web hosting company at the best price, then you have come to the right place. Let FirstWebHosting be your search partner; we aim to make finding dependable and inexpensive web hosting much easier. There are a great number of web hosting providers out there offering top-notch services, so how do you select a company you can rely on? Our tips will go a long way toward keeping the search for a host from becoming a headache. Read on to find the web host that is sure to work for you.
Firstwebhosting.net is well acquainted with the free web hosting and cheap web hosting companies that can suit your needs. We provide instructive reviews containing all the necessary web host search tools - tools that will help you identify the services that meet your requirements and fit your budget.

Monday, September 7, 2009

Multiple Domain Web Hosting/All Computer News

There are several ways to manage multiple domains so it is important to know what your options are and the advantages and disadvantages of each method.

The most basic choice when administering multiple domains is whether or not to do so with the same host. Most hosts offer packages which can be set up to allow several sites on one account, or allow individual sites to be operated under separate accounts.

You may have an existing website and are happy with the services your host provides. If you decide to start a new website using the same host, you will have the reassurance of dealing with a company you are familiar with and trust. You may also get a discount for each additional account you open with the same host. On the other hand, using a new host for a second (or third or fourth) website can allow you to compare the quality of hosting offered by different companies. In addition, separate hosts will provide each of your web sites with a different IP address.
Having different IP addresses can be an important factor if you plan to link the sites together to aid in search engine optimization. Incoming links are an important indicator of the importance of a website, so a site with a lot of incoming links can get a higher position in search engines like Google. If all the links are coming from the same IP address, however, their value may be discounted. Hosting your various sites with different hosting companies guarantees that each site has a different IP address. Individual IP addresses, however, are available as an extra from most web hosts. For a yearly fee each website can have its own unique IP address. This can help with search engine ranking and is also needed if you want to have a secure connection (https) on your site.
If you decide to host all your sites with the same company, there are three basic ways to go: each site could have its own account; you could sign up for a re-seller account; or you could get a dedicated server account.

As a re-seller you are acting as an agent for the hosting company. You are allocated a certain amount of disk space and bandwidth and you are free to use them as you please. There may be a limit to the number of websites you can host with your re-seller account, but if there is space left over after using this account for your own sites you could earn some extra income by selling accounts to other people. The advantage of a re-seller account is that all the technical details are taken care of by the hosting company. Some will even provide gateways for billing your customers.
A dedicated server account gives you control of all the resources of an entire server. You are free to set up as many websites as you wish and allocate disk space and bandwidth as you see fit. The downside to this type of account is that you are responsible for maintaining the server. This can require a significant level of technical know-how, so if you don't have that knowledge or don't feel like learning about it, dedicated servers are not for you. You can, of course, go with a managed dedicated server; the downside of this is the higher cost involved.

Hosting all your sites with one host can offer lots of advantages, but there is one major disadvantage: if your server goes down, all your sites go down. If you are depending on your sites for income this can be a disastrous situation. For this reason, it is a good idea to have at least one of your sites with a different host. If your sites are essential to your livelihood and you can't afford any downtime whatsoever, you would be advised to host everything with (at least) two hosts.

Thursday, September 3, 2009

HP TouchSmart tx2-1275dx

The good: Good price for tablet functionality; multitouch gestures are fun; flashy but not garish design.

The bad: Poor battery life; mediocre application performance; weighted down with bloatware; a tad heavy for a 12-inch ultraportable.



The bottom line: A fair price, an attractive design, and multitouch support may allow tablet shoppers to overlook the HP TouchSmart tx2-1275dx's middling performance and poor battery life.


Specifications: Processor: AMD Turion X2 mobile processor (2.2 GHz); RAM installed: 4 GB DDR2 SDRAM; Hard drive size: 320 GB

Wednesday, September 2, 2009

Networking and the Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Multiprocessing

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as other so-called "embarrassingly parallel" tasks.
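
As a small taste of the multiprocessing idea on an ordinary machine, here is a sketch using Python's standard multiprocessing module to split an "embarrassingly parallel" job across all available cores; the workload itself (summing squares) is invented purely for illustration.

```python
# An "embarrassingly parallel" job split across all available CPU cores.
# Each chunk is independent, so the workers never need to communicate.
from multiprocessing import Pool, cpu_count

def sum_of_squares(chunk: range) -> int:
    """Each worker handles one independent slice of the problem."""
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    workers = cpu_count()
    # Strided slices that together cover 0 .. 9,999,999 exactly once.
    chunks = [range(i, 10_000_000, workers) for i in range(workers)]
    with Pool() as pool:                      # one process per core by default
        partials = pool.map(sum_of_squares, chunks)
    print(sum(partials))                      # same answer as a serial loop
```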

Input/Output

I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

Memory

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
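
Taken literally, that model is easy to act out in a few lines of Python. The cell addresses and the value 123 come from the examples above; the second stored value is made up for the demonstration.

```python
# The "list of numbered cells" model of memory, taken literally.
memory = [0] * 4096                    # 4096 cells, addresses 0..4095

memory[1357] = 123                     # "put the number 123 into cell 1357"
memory[2468] = 77                      # some other stored number (made up)

# "add the number in cell 1357 to the number in cell 2468,
#  and put the answer into cell 1595"
memory[1595] = memory[1357] + memory[2468]

print(memory[1595])                    # 200
```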

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
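
Python's built-in integers can make these byte-level rules visible. The sketch below shows the same eight-bit pattern read back as a signed and as an unsigned number, and a larger value spread across four consecutive bytes.

```python
# How a byte holds 0..255 or -128..+127, and how several bytes combine.
# int.to_bytes / int.from_bytes expose the two's-complement convention.

x = -1
raw = x.to_bytes(1, byteorder="big", signed=True)
print(raw.hex())                                   # 'ff' -> all eight bits set

# The same bit pattern reinterpreted as an unsigned byte:
print(int.from_bytes(raw, byteorder="big", signed=False))   # 255

# Larger numbers use several consecutive bytes (here, four):
big = (123456789).to_bytes(4, byteorder="big")
print(big.hex())                                   # '075bcd15'
print(int.from_bytes(big, byteorder="big"))        # 123456789
```
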
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random-access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
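
Hardware caches are transparent to the programmer, but the underlying idea - keeping frequently needed results close at hand - is easy to mimic in software. The sketch below uses Python's functools.lru_cache as a loose software analogue; this is memoization, not a model of how a CPU cache actually works.

```python
# A software analogue of caching: remember recent results automatically,
# so repeated requests are answered without the expensive trip.
from functools import lru_cache

@lru_cache(maxsize=128)               # keep the 128 most recently used results
def slow_lookup(key: int) -> int:
    print(f"fetching {key} from 'main memory'...")
    return key * 2                    # stand-in for an expensive fetch

slow_lookup(7)    # miss: prints the fetch message, computes the result
slow_lookup(7)    # hit: answered straight from the cache, no message
```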

Arithmetic/logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic.

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting, or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation - although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
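
The point about composing complex operations from simple ones can be made concrete: any machine that can add can be made to multiply, as in this small Python sketch - just slower than hardware that multiplies directly.

```python
# Any ALU that can add can be taught to multiply, one simple step at a time.

def multiply_by_addition(a: int, b: int) -> int:
    """Multiply two non-negative integers using only repeated addition."""
    total = 0
    for _ in range(b):    # add a to itself b times
        total += a
    return total

print(multiply_by_addition(6, 7))   # 42
print(6 * 7)                        # the direct 'hardware' answer, for comparison
print(64 > 65)                      # False - the comparison from the text above
```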

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
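
Python's bitwise operators apply exactly these four operations, bit by bit, which makes them easy to demonstrate; the two four-bit patterns below are arbitrary examples.

```python
# The four basic logic operations, applied bit by bit to two 4-bit patterns.
a, b = 0b1100, 0b1010

print(format(a & b, "04b"))         # AND -> 1000 (1 only where both are 1)
print(format(a | b, "04b"))         # OR  -> 1110 (1 where either is 1)
print(format(a ^ b, "04b"))         # XOR -> 0110 (1 where they differ)
print(format(~a & 0b1111, "04b"))   # NOT -> 0011 (flip, masked to 4 bits)
```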

Superscalar computers may contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

Control Unit of Computer

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system's function is as follows; note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU (a toy version of the whole loop appears in the sketch after the list):
Read the code for the next instruction from the cell indicated by the program counter.
Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
Increment the program counter so it points to the next instruction.
Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
Provide the necessary data to an ALU or register.
If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
Write the result from the ALU back to a memory location or to a register or perhaps an output device.
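
To tie the steps together, here is a toy machine in Python that fetches, decodes, and executes instructions while maintaining a program counter. The tiny instruction set (LOAD, ADD, JUMP, PRINT, HALT) and the single accumulator register are invented purely for illustration; note how JUMP simply overwrites the program counter, the mechanism described in the next paragraph.

```python
# A toy machine walking through the steps listed above: fetch an instruction,
# advance the program counter, then decode and execute.

memory = {0: ("LOAD", 5), 1: ("ADD", 3), 2: ("JUMP", 4),
          3: ("ADD", 999),              # skipped over by the jump
          4: ("PRINT", None), 5: ("HALT", None)}

pc = 0                                  # the program counter
accumulator = 0                         # a single register

while True:
    opcode, operand = memory[pc]        # fetch the next instruction
    pc += 1                             # increment the program counter
    if opcode == "LOAD":                # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "JUMP":              # a jump simply rewrites the PC
        pc = operand
    elif opcode == "PRINT":
        print(accumulator)              # -> 8
    elif opcode == "HALT":
        break
```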

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
Notably, the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program - and indeed, in some more complex CPU designs, there is another, yet smaller, computer called a microsequencer that runs a microcode program that causes all of these events to happen.