Fibre Channel over IP (FCIP or FC/IP) Definition

Fibre Channel over IP (FCIP or FC/IP, also known as Fibre Channel tunneling or storage tunneling) is an Internet Protocol (IP)-based storage networking technology developed by the Internet Engineering Task Force (IETF). FCIP mechanisms enable the transmission of Fibre Channel (FC) information by tunneling data between storage area network (SAN) facilities over IP networks; this capability facilitates data sharing across a geographically distributed enterprise. One of two main approaches to storage data transmission over IP networks, FCIP is among the key technologies expected to help bring about rapid development of the storage area network market by increasing the capabilities and performance of storage data transmission.
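
To make the tunneling idea concrete, here is a minimal Python sketch of a hypothetical FCIP-style gateway: it wraps each raw Fibre Channel frame in a small encapsulation header and carries it over an ordinary TCP connection between two SAN sites. The length-prefix header below is an illustrative stand-in, not the exact on-the-wire encapsulation format the IETF defines for FCIP.

    import socket
    import struct

    def tunnel_fc_frame(sock: socket.socket, fc_frame: bytes) -> None:
        """Send one raw FC frame through the tunnel, length-prefixed.

        The 4-byte prefix stands in for the real FCIP encapsulation
        header, which carries additional fields such as version and flags.
        """
        header = struct.pack("!I", len(fc_frame))  # network byte order
        sock.sendall(header + fc_frame)

    def receive_fc_frame(sock: socket.socket) -> bytes:
        """Read one encapsulated FC frame back off the TCP stream."""
        (length,) = struct.unpack("!I", _read_exactly(sock, 4))
        return _read_exactly(sock, length)

    def _read_exactly(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("tunnel closed mid-frame")
            buf += chunk
        return buf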

FCIP Versus iSCSI

The other method, iSCSI, generates SCSI codes from user requests and encapsulates the data into IP packets for transmission over an Ethernet connection. Intended to link geographically distributed SANs, FCIP can be used only in conjunction with Fibre Channel technology; in comparison, iSCSI can run over existing Ethernet networks. SAN connectivity, through methods such as FCIP and iSCSI, offers benefits over the traditional point-to-point connections of earlier data storage systems, such as higher performance, availability, and fault tolerance. A number of vendors, including Cisco, Nortel, and Lucent, have introduced FCIP-based products such as switches and routers. A hybrid technology called Internet Fibre Channel Protocol (iFCP) is an adaptation of FCIP that is used to move Fibre Channel data over IP networks using the iSCSI protocols.

 

InfiniBand Definition

InfiniBand is a type of communications link for data flow between processors and I/O devices that offers throughput of up to 2.5 gigabits per second per link and support for up to 64,000 addressable devices. Because it is also scalable and supports quality of service (QoS) and failover, InfiniBand is often used as a server connect in high-performance computing (HPC) environments.

The internal data flow system in most PCs and server systems is inflexible and relatively slow. As the amount of data coming into and flowing between components in the computer increases, the existing bus system becomes a bottleneck. Instead of sending data in parallel (typically 32 bits at a time, but in some computers 64 bits) across the backplane bus, InfiniBand specifies a serial (bit-at-a-time) bus. Fewer pins and other electrical connections are required, saving manufacturing cost and improving reliability. The serial bus can carry multiple channels of data at the same time in a multiplexed signal. InfiniBand also supports multiple memory areas, each of which can be addressed by both processors and storage devices.

The InfiniBand Trade Association views the bus itself as a switch because control information determines the route a given message follows in getting to its destination address. InfiniBand uses Internet Protocol Version 6 (IPv6), which enables an almost limitless amount of device expansion.

With InfiniBand, data is transmitted in packets that together form a communication called a message. A message can be a remote direct memory access (RDMA) read or write operation, a channel send or receive message, a reversible transaction-based operation, or a multicast transmission. Like the channel model many mainframe users are familiar with, all transmission begins or ends with a channel adapter. Each processor (your PC or a data center server, for example) has a host channel adapter (HCA) and each peripheral device has a target channel adapter (TCA). These adapters can also exchange information that ensures security or enforces a given QoS level.

The InfiniBand specification was developed by merging two competing designs: Future I/O, developed by Compaq, IBM, and Hewlett-Packard, and Next Generation I/O, developed by Intel, Microsoft, and Sun Microsystems.
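
The packet-and-message relationship can be sketched in a few lines of Python: the sending channel adapter segments a message into MTU-sized packets, and the receiving adapter reassembles them in order. The function names and the 4,096-byte MTU below are illustrative assumptions, not InfiniBand's actual verbs API.

    MTU = 4096  # one of several packet sizes the InfiniBand spec allows

    def segment(message: bytes, mtu: int = MTU) -> list[bytes]:
        """Split one message into the ordered packets an adapter would send."""
        return [message[i:i + mtu] for i in range(0, len(message), mtu)]

    def reassemble(packets: list[bytes]) -> bytes:
        """The target channel adapter rebuilds the message from its packets."""
        return b"".join(packets)

    msg = b"\x00" * 10_000            # a 10,000-byte RDMA-style payload
    packets = segment(msg)            # 3 packets: 4096 + 4096 + 1808 bytes
    assert reassemble(packets) == msg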

Internet Protocol Definition

The Internet Protocol (IP) is the method or protocol by which data is sent from one computer to another on the Internet. Each computer (known as a host) on the Internet has at least one IP address that uniquely identifies it from all other computers on the Internet.

When you send or receive data (for example, an e-mail note or a Web page), the message gets divided into little chunks called packets. Each of these packets contains both the sender’s Internet address and the receiver’s address. Any packet is sent first to a gateway computer that understands a small part of the Internet. The gateway computer reads the destination address and forwards the packet to an adjacent gateway that in turn reads the destination address and so forth across the Internet until one gateway recognizes the packet as belonging to a computer within its immediate neighborhood or domain. That gateway then forwards the packet directly to the computer whose address is specified.

Because a message is divided into a number of packets, each packet can, if necessary, be sent by a different route across the Internet. Packets can arrive in a different order than the order they were sent in. The Internet Protocol just delivers them; it’s up to another protocol, the Transmission Control Protocol (TCP), to put them back in the right order.
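
That division of labor is easy to sketch in Python: IP hands packets up in whatever order they arrive, and a TCP-like layer uses sequence numbers to restore the original order. The sequence numbers and data here are made up for illustration.

    def reorder(packets: list[tuple[int, bytes]]) -> bytes:
        """TCP-style reassembly: sort arriving (sequence number, data) pairs."""
        return b"".join(data for _, data in sorted(packets))

    # IP itself would deliver these as-is, out of order
    arrived = [(2, b" world"), (1, b"Hello,")]
    assert reorder(arrived) == b"Hello, world"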

IP is a connectionless protocol, which means that there is no continuing connection between the end points that are communicating. Each packet that travels through the Internet is treated as an independent unit of data without any relation to any other unit of data. (The reason the packets do get put in the right order is because of TCP, the connection-oriented protocol that keeps track of the packet sequence in a message.) In the Open Systems Interconnection (OSI) communication model, IP is in Layer 3, the Network layer.

The most widely used version of IP today is Internet Protocol Version 4 (IPv4). However, IP Version 6 (IPv6) is also beginning to be supported. IPv6 provides for much longer addresses and therefore for the possibility of many more Internet users. IPv6 includes the capabilities of IPv4 and any server that can support IPv6 packets can also support IPv4 packets.
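
Python’s standard ipaddress module makes the difference in address length concrete:

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")    # IPv4: 32-bit address
    v6 = ipaddress.ip_address("2001:db8::1")  # IPv6: 128-bit address
    print(v4.version, v4.max_prefixlen)       # 4 32
    print(v6.version, v6.max_prefixlen)       # 6 128
    print(2 ** 128 // 2 ** 32)                # IPv6 offers ~7.9e28 times more addresses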

Router Definition

In packet-switched networks such as the internet, a router is a device or, in some cases, software on a computer, that determines the best way for a packet to be forwarded to its destination.

A router connects networks. Based on its current understanding of the state of the network it is connected to, a router acts as a dispatcher as it decides which way to send each information packet. A router is located at any gateway (where one network meets another), including each point-of-presence on the internet. A router is often included as part of a network switch.

How does a router work?

A router may create or maintain a table of the available routes and their conditions and use this information along with distance and cost algorithms to determine the best route for a given packet. Typically, a packet may travel through a number of network points with routers before arriving at its destination. Routing is a function associated with the network layer (Layer 3) in the standard model of network programming, the Open Systems Interconnection (OSI) model. A Layer 3 switch is a switch that can perform routing functions.
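
A minimal Python sketch of that table lookup follows, using longest-prefix match, the rule real routers use to choose among overlapping routes; the table contents are invented for illustration.

    import ipaddress

    ROUTES = [  # (destination prefix, next hop)
        (ipaddress.ip_network("10.0.0.0/8"), "10.255.0.1"),
        (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.254"),
        (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1"),  # default route
    ]

    def next_hop(destination: str) -> str:
        """Pick the matching route with the longest (most specific) prefix."""
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in ROUTES if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(next_hop("10.1.2.3"))  # 10.1.0.254 (the /16 beats the /8)
    print(next_hop("8.8.8.8"))   # 192.0.2.1  (falls through to the default)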

An edge router is a device located at the boundary of a network that connects to other networks, wide area networks or the internet. For home and business computer users who have high-speed internet connections such as cable, satellite or DSL, a router can act as a hardware firewall. Many engineers believe that the use of a router provides better protection against hacking than a software firewall, because no computer Internet Protocol (IP) addresses are directly exposed to the internet, which makes port scans (a technique for probing for weaknesses) essentially impossible. In addition, a router does not consume computer resources the way a software firewall does. Commercially manufactured routers are easy to install and are available for hard-wired or wireless networks.

Protocol Definition

In information technology, a protocol is the special set of rules that end points in a telecommunication connection use when they communicate. Protocols specify interactions between the communicating entities.

What is ECC memory?

For servers in businesses and data centers, it’s mission-critical to minimize errors in data, and that’s the purpose of ECC (Error Correcting Code) memory.

ECC is a method of detecting and then correcting single-bit memory errors. A single-bit memory error is a data error in which one bit read from memory differs from what was written, and even a single such error can have a big impact on server output and performance.

There are two types of single-bit memory errors: hard errors and soft errors. Hard errors are caused by physical factors, such as excessive temperature variation, voltage stress, or physical stress brought upon the memory bits.

Soft errors occur when data is written or read differently than originally intended, due to causes such as variations in voltage on the motherboard, cosmic rays, or radioactive decay that can cause bits in the memory to flip. Since bits retain their programmed value in the form of an electrical charge, this type of interference can alter the charge of the memory bit, causing an error. In servers, there are multiple places where errors can occur: in the storage drive, in the CPU core, through a network connection, and in various types of memory.

For workstations and servers where errors, data corruption and/or system failure must be avoided at all cost, such as in the financial sector, ECC memory is often the memory of choice.

Here’s how ECC memory works. In computing, data is received and transmitted in bits, the smallest unit of data in a computer, each expressed in binary code as either a one or a zero.

When bits are grouped together, they create binary code, or “words,” which are units of data that are addressed and moved between memory and the CPU. For example, an 8-bit binary code is 10110001.

With ECC memory, there is an extra ECC bit, which is known as a parity bit. This extra parity bit makes the binary code read 101100010, where the last zero is the parity bit and is used to identify memory errors. If the sum of all the 1s in a line of code, including the parity bit, is an even number, then the line of code is said to have even parity, and error-free code always has even parity under this scheme. However, parity has two limitations: it is only able to detect odd numbers of flipped bits (1, 3, 5, and so on) and allows even numbers of errors (2, 4, 6, and so on) to pass undetected. Parity also isn’t able to correct errors; it is only able to detect them. That’s where ECC memory comes into play.
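
The parity rule is small enough to sketch in Python, using the article’s 8-bit example:

    def add_parity(data_bits: str) -> str:
        """Append an even-parity bit so the total count of 1s is even."""
        return data_bits + str(data_bits.count("1") % 2)

    def parity_ok(codeword: str) -> bool:
        """Detection only: an even number of flipped bits slips through."""
        return codeword.count("1") % 2 == 0

    word = add_parity("10110001")      # -> "101100010", as above
    assert parity_ok(word)
    assert not parity_ok("100100010")  # a single flipped bit is caught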

ECC memory uses extra parity bits to store an encoded error-correcting code alongside the data when it is written to memory. When the data is read back, a new ECC code is generated and compared with the stored one. If the two codes don’t match, the difference between them identifies which bit is in error, and that bit is immediately corrected. Syndrome tables are a mathematical way of identifying these bit errors and then correcting them.
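
Here is a minimal Python sketch of that correction step, using the classic Hamming(7,4) code; real ECC modules use a wider code over 64 data bits, but the syndrome mechanism is the same.

    def encode(d1: int, d2: int, d3: int, d4: int) -> list[int]:
        """Pack 4 data bits into a 7-bit codeword; parity sits at positions 1, 2, 4."""
        p1 = d1 ^ d2 ^ d4  # covers codeword positions 3, 5, 7
        p2 = d1 ^ d3 ^ d4  # covers codeword positions 3, 6, 7
        p3 = d2 ^ d3 ^ d4  # covers codeword positions 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def correct(c: list[int]) -> list[int]:
        """Recompute parity: the syndrome is the 1-based position of the bad bit."""
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4, 5, 6, 7
        syndrome = s1 + 2 * s2 + 4 * s3
        if syndrome:                    # non-zero: flip that bit back
            c[syndrome - 1] ^= 1
        return c

    code = encode(1, 0, 1, 1)
    code[2] ^= 1                        # simulate a single-bit soft error
    assert correct(code) == encode(1, 0, 1, 1)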

As data is processed, ECC memory is constantly scanning code with a special algorithm to detect and correct single-bit memory errors.

In mission-critical industries, such as the financial sector, ECC memory can make a massive difference. Imagine you’re editing a client’s confidential account information and then exchanging this data with other financial institutions. As you’re sending the data, say a binary digit gets flipped by some type of electrical interference.

The binary code that the other financial institution would receive could be 100100010, which communicates different information than you originally intended: it’s an error. The third digit has been flipped from a 1 to a 0 by the electrical interference. The sum of the 1s, which should be even, now totals 3, so the parity check fails, meaning the confidential data you sent is at risk of being corrupted (or your system is at risk of a crash). However, if ECC memory is installed, it will be able to detect the error and correct it by changing the third binary digit back to a 1 (the original code).

By detecting and correcting single-bit errors, ECC server memory helps preserve the integrity of your data, prevent data corruption, and prevent system crashes and failures.


Power Supply Definition

A power supply is a hardware component that supplies power to an electrical device. It receives power from an electrical outlet and converts the current from AC (alternating current) to DC (direct current), which is what the computer requires. It also regulates the voltage to an adequate amount, which allows the computer to run smoothly without overheating. The power supply is an integral part of any computer and must function correctly for the rest of the components to work.

You can locate the power supply on a system unit by simply finding the input where the power cord is plugged in. Without opening your computer, this is typically the only part of the power supply you will see. If you were to remove the power supply, it would look like a metal box with a fan inside and some cables attached to it. Of course, you should never have to remove the power supply, so it’s best to leave it in the case.

While most computers have internal power supplies, many electronic devices use external ones. For example, some monitors and external hard drives have power supplies that reside outside the main unit. These power supplies are connected directly to the cable that plugs into the wall. They often include another cable that connects the device to the power supply. Some power supplies, often called “AC adaptors,” are connected directly to the plug (which can make them difficult to plug in where space is limited). Both of these designs allow the main device to be smaller or sleeker by moving the power supply outside the unit.

Since the power supply is the first place an electronic device receives electricity, it is also the most vulnerable to power surges and spikes. Therefore, power supplies are designed to handle fluctuations in electrical current and still provide a regulated or consistent power output. Some include fuses that will blow if the surge is too great, protecting the rest of the equipment. After all, it is much cheaper to replace a power supply than an entire computer. Still, it is wise to connect all electronics to a surge protector or UPS to keep them from being damaged by electrical surges.

ICMP Definition

Stands for “Internet Control Message Protocol.” When information is transferred over the Internet, computer systems send and receive data using the TCP/IP protocols. If there is a problem with the connection, error and status messages regarding the connection are sent using ICMP, which is part of the Internet protocol suite.

When one computer connects to another system over the Internet (such as a home computer connecting to a Web server to view a website), it may seem like a quick and easy process. While the connection may take place in a matter of seconds, there are often many separate connections that must happen in order for the computers to successfully communicate with each other. In fact, if you were to trace all the steps of an Internet connection using a traceroute command, it might surprise you that Internet connections are successful as often as they are. This is because for every “hop” along the way, the network must be functional and able to accept requests from your computer.

In cases where there is a problem with the connection, ICMP can send back codes to your system explaining why a connection failed. These may be messages such as, “Network unreachable” for a system that is down, or “Access denied” for a secure, password-protected system. ICMP may also provide routing suggestions to help bypass unresponsive systems. While ICMP can send a variety of different messages, most are never seen by the user. Even if you do receive an error message, the software you are using, such as a Web browser, has most likely already translated the message into simple (and hopefully less technical) language you can understand.
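
The message format itself is simple: every ICMP message begins with a type, a code, and a checksum. The Python sketch below builds an echo request (type 8), the probe that ping sends; actually transmitting it would require a raw socket and administrator privileges, so this only constructs the bytes.

    import struct

    def icmp_checksum(data: bytes) -> int:
        """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack(f"!{len(data) // 2}H", data))
        while total >> 16:                       # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
        """Type 8, code 0, checksum computed over the whole message."""
        header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
        csum = icmp_checksum(header + payload)
        return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

    packet = echo_request(ident=1, seq=1)
    assert icmp_checksum(packet) == 0  # a valid ICMP message checksums to zero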

CGI Definition

CGI has two different meanings:

1) Common Gateway Interface, and

2) Computer Generated Imagery.

1) Common Gateway Interface

The Common Gateway Interface (CGI) is a set of rules for running scripts and programs on a Web server. It specifies what information is communicated between the Web server and clients’ Web browsers and how the information is transmitted.

Most Web servers include a cgi-bin directory in the root folder of each website on the server. Any scripts placed in this directory must follow the rules of the Common Gateway Interface. For example, scripts located in the cgi-bin directory may be given executable permissions, while files outside the directory may not be allowed to be executed. A CGI script may also request CGI environment variables, such as SERVER_PROTOCOL and REMOTE_HOST, which may be used as input variables for the script.
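
A complete CGI script can be very small. The Python example below, of the sort that could be dropped into a cgi-bin directory and marked executable, reads two of the environment variables mentioned above and writes an HTTP header block followed by the page body to standard output, which is how CGI returns output to the server.

    #!/usr/bin/env python3
    # Minimal CGI script: the web server sets the environment variables
    # and relays whatever this prints back to the client's browser.
    import os

    protocol = os.environ.get("SERVER_PROTOCOL", "unknown")
    host = os.environ.get("REMOTE_HOST") or os.environ.get("REMOTE_ADDR", "unknown")

    print("Content-Type: text/html")  # header block...
    print()                           # ...terminated by one blank line
    print(f"<p>You connected from {host} using {protocol}.</p>")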

Since CGI is a standard interface, it can be used on multiple types of hardware platforms and is supported by several types of Web server software, such as Apache and Windows Server. CGI scripts and programs can also be written in several different languages, such as C++, Java, and Perl. While many websites continue to use CGI for running programs and scripts, developers now often include scripts directly within Web pages. These scripts, which are written in languages such as PHP and ASP, are processed on the server before the page is loaded, and the resulting data is sent to the user’s browser.

2) Computer Generated Imagery

In the computer graphics world, CGI typically refers to Computer Generated Imagery. This type of CGI refers to 3D graphics used in film, TV, and other types of visual media. Most modern action films include at least some CGI for special effects, while other movies, such as Pixar’s animated films, are built completely from computer-generated graphics.

 

DAW Definition

Stands for “Digital Audio Workstation.” A DAW is a digital system designed for recording and editing digital audio. It may refer to audio hardware, audio software, or both.

Early DAWs, such as those developed in the 1970s and 1980s, were hardware units that included a mixing console, a data storage device, and an analog-to-digital converter (ADC). They could be used to record, edit, and play back digital audio. These devices, called “integrated DAWs,” are still used today, but they have largely been replaced by computer systems with digital audio software.

Today, a computer system is the central user interface of most DAWs. Most professional recording studios include one or more large mixing boards connected to a desktop computer. Home studios and portable studios may simply include a laptop with audio software and a recording interface.

Since computers have replaced most integrated DAWs, audio editing and post-production is now performed primarily with software rather than hardware. Several audio production programs, commonly called DAW software, are available for both Macintosh and Windows systems. Some common cross-platform titles include Avid Pro Tools, Steinberg Cubase, and Ableton Live. Other platform-specific DAW programs include Cakewalk SONAR for Windows and MOTU Digital Performer for Mac OS X.