All digital circuits require regulated power supply. In this article we are going to learn how to get a regulated positive supply from the mains supply.
Figure 1 shows the basic block diagram of a fixed regulated power supply. Let us go through each block.
A transformer consists of two coils, also called "windings," namely the PRIMARY and the SECONDARY.
They are linked together through a shared magnetic CORE. A changing current in the primary causes a changing magnetic field in the core, and this in turn induces an alternating voltage in the secondary coil. If a load is connected to the secondary, an alternating current will flow through the load. Under ideal conditions, all the energy from the primary circuit is transferred to the secondary circuit through the magnetic field.
The secondary voltage of the transformer depends on the ratio of the number of turns in the primary to the number of turns in the secondary.
A rectifier is a device that converts an AC signal into a DC signal. For rectification we use a diode: a device that allows current to pass in only one direction, i.e. when the anode of the diode is positive with respect to the cathode (the forward-biased condition), and blocks current in the reverse-biased condition.
Rectifiers can be classified as follows:
1) Half Wave rectifier.
This is the simplest type of rectifier. As you can see in the diagram, a half wave rectifier consists of only one diode. When an AC signal is applied, during the positive half cycle the diode is forward biased and current flows through it, but during the negative half cycle the diode is reverse biased and no current flows. Since only one half of the input reaches the output, it is too inefficient to be used in power supplies.
2) Full wave rectifier.
The half wave rectifier is quite simple but very inefficient; for greater efficiency we would like to use both half cycles of the AC signal. This can be achieved with a center tapped transformer, i.e. we double the size of the secondary winding and provide a connection to its center. During the positive half cycle diode D1 conducts and D2 is reverse biased; during the negative half cycle diode D2 conducts and D1 is reverse biased. Thus we get both half cycles across the load.
One of the disadvantages of Full Wave Rectifier design is the necessity of using a center tapped transformer, thus increasing the size & cost of the circuit. This can be avoided by using the Full Wave Bridge Rectifier.
3) Bridge Rectifier.
As the name suggests, it converts the full wave, i.e. both the positive and the negative half cycles, into DC. It is therefore much more efficient than the half wave rectifier, and it does so without a center tapped transformer, making it more cost effective than the center-tapped full wave rectifier.
The full wave bridge rectifier consists of four diodes, namely D1, D2, D3 and D4. During the positive half cycle diodes D1 and D4 conduct, whereas during the negative half cycle diodes D2 and D3 conduct. The diodes effectively keep switching the transformer connections, so we get positive half cycles at the output.
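The switching action described above can be sketched numerically. This is an illustrative model only, treating the diodes as ideal except for a fixed 0.7 V forward drop each:

```python
import math

def bridge_output(v_in, diode_drop=0.7):
    """Instantaneous output of a full-wave bridge rectifier:
    the absolute value of the input, less two forward diode
    drops (one conducting pair per half cycle). Output clamps
    to zero when the input cannot forward-bias the diodes."""
    return max(0.0, abs(v_in) - 2 * diode_drop)

# 12 V-peak, 50 Hz secondary sampled over one full cycle:
for ms in range(0, 21, 5):
    v_in = 12 * math.sin(2 * math.pi * 50 * ms / 1000)
    print(f"t = {ms:2d} ms: in {v_in:6.2f} V, out {bridge_output(v_in):5.2f} V")
```

Note how both the positive and negative input peaks produce positive output peaks, which is exactly the switching behavior of the diode pairs.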
If we use a center tapped transformer for a bridge rectifier we can get both positive & negative half cycles which can thus be used for generating fixed positive & fixed negative voltages.
Even though half wave and full wave rectifiers give a DC output, neither provides a constant output voltage. For this we need to smooth the waveform received from the rectifier. This can be done with a capacitor at the output of the rectifier, also called a "FILTER CAPACITOR," "SMOOTHING CAPACITOR," or "RESERVOIR CAPACITOR." Even with this capacitor, a small amount of ripple will remain.
With the filter capacitor placed at the output of the rectifier, the capacitor charges to the peak voltage during each half cycle, then slowly discharges its stored energy through the load while the rectified voltage drops to zero, thus keeping the output voltage as constant as possible.
If we increase the value of the filter capacitor, the ripple will decrease, but the cost will increase. The required filter capacitance depends on the current consumed by the circuit, the frequency of the rectified waveform, and the accepted ripple: C = I / (F × Vr), where
Vr = accepted ripple voltage (should not be more than 10% of the output voltage),
I = current consumed by the circuit, in amperes,
F = frequency of the rectified waveform. For 50 Hz mains, a half wave rectifier produces only one peak per cycle, so F = 50 Hz,
whereas a full wave rectifier produces two peaks per cycle, so F = 100 Hz.
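The relationship C = I / (F × Vr) can be checked numerically; the load current and ripple figures below are illustrative, assuming full-wave rectification of 50 Hz mains:

```python
def filter_capacitance(load_current_a, ripple_v, ripple_freq_hz):
    """Approximate filter capacitance: C = I / (F * Vr).

    Assumes the capacitor supplies the full load current between
    rectifier peaks and discharges roughly linearly."""
    return load_current_a / (ripple_freq_hz * ripple_v)

# Example: 500 mA load, 1 V accepted ripple, full-wave rectified 50 Hz mains
c = filter_capacitance(0.5, 1.0, 100)
print(f"C = {c * 1e6:.0f} uF")  # C = 5000 uF
```

Doubling the accepted ripple halves the required capacitance, which is why the 10% ripple allowance matters so much to circuit cost.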
A voltage regulator is a device that converts a varying input voltage into a constant, regulated output voltage. Voltage regulators are of two types:
1) Linear Voltage Regulator
Also called resistive voltage regulators, because they dissipate the excess power resistively, as heat.
2) Switching Regulators.
They regulate the output voltage by switching the current ON and OFF very rapidly. Since the pass element is either fully ON or fully OFF, it dissipates very little power, achieving higher efficiency than linear voltage regulators. But they are more complex and generate more noise due to their switching action. At low output power, switching regulators tend to be costly, but at higher output wattages they are much cheaper than linear regulators.
The most commonly available linear positive voltage regulators are the 78XX series, where XX indicates the output voltage; the 79XX series provides the corresponding negative voltage regulators.
After filtering, the rectifier output is given to a voltage regulator. The maximum voltage that can be applied at the input is 35 V. Normally there is a 2-3 volt drop across the regulator, so the input voltage should be at least 2-3 volts higher than the output voltage. If the input voltage falls below this minimum, due to ripple or any other reason, the voltage regulator will not be able to produce the correct regulated voltage.
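The headroom requirement can be expressed as a simple check. This is a sketch using the rule-of-thumb 2 V dropout from the text, not a datasheet value:

```python
def regulator_ok(v_in_min, v_out, dropout=2.0, v_in_max=35.0):
    """Check that the input voltage stays within the regulator's
    working range: the ripple trough must exceed the output
    voltage plus the dropout, and the peak must not exceed the
    absolute maximum input (35 V for the 78XX series)."""
    return (v_out + dropout) <= v_in_min <= v_in_max

# A 7805 (5 V output) fed from a filter whose ripple trough is 8 V:
print(regulator_ok(8.0, 5.0))  # True  -- 8 V >= 5 V + 2 V headroom
print(regulator_ok(6.5, 5.0))  # False -- ripple dips below the minimum
```

This is exactly why the filter capacitor must be sized first: the ripple trough, not the average voltage, determines whether regulation holds.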
Don't just sit there! Build something!!
It has been my experience that students require much practice with circuit analysis to become proficient. To this end, instructors usually provide their students with lots of practice problems to work through, and provide answers for students to check their work against. While this approach makes students proficient in circuit theory, it fails to fully educate them.
Students don't just need mathematical practice. They also need real, hands-on practice building circuits and using test equipment. So, I suggest the following alternative approach: students should build their own "practice problems" with real components, and try to predict the various logic states. This way, the digital theory "comes alive," and students gain practical proficiency they wouldn't gain merely by solving Boolean equations or simplifying Karnaugh maps.
Another reason for following this method of practice is to teach students scientific method: the process of testing a hypothesis (in this case, logic state predictions) by performing a real experiment. Students will also develop real troubleshooting skills as they occasionally make circuit construction errors.
Spend a few moments of time with your class to review some of the "rules" for building circuits before they begin. Discuss these issues with your students in the same Socratic manner you would normally discuss the worksheet questions, rather than simply telling them what they should and should not do. I never cease to be amazed at how poorly students grasp instructions when presented in a typical lecture (instructor monologue) format!
I highly recommend CMOS logic circuitry for at-home experiments, where students may not have access to a 5-volt regulated power supply. Modern CMOS circuitry is far more rugged with regard to static discharge than the first CMOS circuits, so fears of students harming these devices by not having a "proper" laboratory set up at home are largely unfounded.
A note to those instructors who may complain about the "wasted" time required to have students build real circuits instead of just mathematically analyzing theoretical circuits:
What is the purpose of students taking your course?
If your students will be working with real circuits, then they should learn on real circuits whenever possible. If your goal is to educate theoretical physicists, then stick with abstract analysis, by all means! But most of us plan for our students to do something in the real world with the education we give them. The "wasted" time spent building real circuits will pay huge dividends when it comes time for them to apply their knowledge to practical problems.
Furthermore, having students build their own practice problems teaches them how to perform primary research, thus empowering them to continue their electrical/electronics education autonomously.
In most sciences, realistic experiments are much more difficult and expensive to set up than electrical circuits. Nuclear physics, biology, geology, and chemistry professors would just love to be able to have their students apply advanced mathematics to real experiments posing no safety hazard and costing less than a textbook. They can't, but you can. Exploit the convenience inherent to your science, and get those students of yours practicing their math on lots of real circuits!
Knowing that the UJT forms an oscillator, it is tempting to think that the load will turn on and off repeatedly. The first sentence in the answer explains why this will not happen, though.
I got the basic idea for this circuit from the second edition of Electronics for Industrial Electricians, by Stephen L. Herman.
Although it may seem premature to introduce the 555 timer chip when students are just finishing their study of DC, I wanted to provide a practical application of RC circuits, and also of algebra in generating useful equations. If you deem this question too advanced for your student group, by all means skip it.
Incidentally, I simplified the diagram where I show the capacitor discharging: there is actually another current at work here. Since it wasn't relevant to the problem, I omitted it. However, some students may be adept enough to catch the omission, so I show it here:
This popular configuration of the 555 integrated circuit is well worth spending time analyzing and discussing with your students.
This question really probes students' conceptual understanding of the 555 timer, used as an astable multivibrator (oscillator). If some students just can't seem to grasp the function of the diode, illuminate their understanding by having them trace the charging and discharging current paths. Once they understand which way current goes in both cycles of the timer, they should be able to recognize what the diode does and why it is necessary.
Have your students show you how they mathematically derived their answer based on their knowledge of how capacitors charge and discharge. Many textbooks and datasheets provide this same equation, but it is important for students to be able to derive it themselves from what they already know of capacitors and RC time constants. Why is this important? Because in ten years they won't remember this specialized equation, but they will probably still remember the general time constant equation from all the time they spent learning it in their basic DC electricity courses (and applying it on the job). My motto is, "never remember what you can figure out."
Practical applications abound for such a circuit. One whimsical application is to energize sequential tail-light bulbs for an automobile, to give an interesting turn-signal visual effect. A sequential timer circuit was used to do just this on certain years of (classic) Ford Cougar cars. Other, more utilitarian, applications for sequential timers include start-up sequences for a variety of electronic systems, traffic light controls, and automated household appliances.
The purpose of this question is to approach the domain of circuit troubleshooting from a perspective of knowing what the fault is, rather than only knowing what the symptoms are. Although this is not necessarily a realistic perspective, it helps students build the foundational knowledge necessary to diagnose a faulted circuit from empirical data. Questions such as this should be followed (eventually) by other questions asking students to identify likely faults based on measurements.
Ask your students to explain why the frequency and amplitude change in this circuit. It is far too easy for a student to simply repeat the answer given by the worksheet! Hold your students accountable for reasoning through the operation of a circuit like this.
Every year it seems I have at least one student who experiences this particular problem, usually as a result of hasty circuit assembly (not making all necessary connections to pins on the chip). This is a good question to brainstorm with your class on, exploring possible causes and methods of diagnosis.
Be sure to discuss the reasons why each of your students' proposed component faults would cause the final output to never go high. The possibilities range from the obvious to the obscure, and exploring them will strengthen your students' understanding of the 555 as a monostable multivibrator.
There is much literature available discussing PWM power control, and its advantages over linear power control. Your students should have no difficulty finding it on their own!
Discuss with them the proposed solution to the high-voltage motor problem. What purpose(s) do/does the solid-state relay serve? Is there a way to achieve PWM control over the motor without using an optocoupled device? If so, how? Let your students show their solutions and discuss the practicality of each.
Decoupling power supply pins on a chip is important, but here students get to see another variation of decoupling. If time permits, work through a sample problem with your students sizing capacitor C2, given a certain operating frequency of the astable circuit. Note: this will give you another opportunity to use Thévenin's Theorem . . .
Discuss with your students why such devices exist, in light of the existence of 555 timers. Why couldn't a 555 timer be used for the same purpose as the 74LS31?
Duty cycle is a very important concept, as analog information may be conveyed through the variable duty cycle of an otherwise digital pulse waveform. Discuss this application with your students, if time permits.
This question is a simple exercise in researching a component datasheet.
Have your students show you how they mathematically derived their answer based on their knowledge of how capacitors charge and discharge. Many textbooks and datasheets provide this same equation, but it is important for students to be able to derive it themselves from what they already know of capacitors and RC time constants. Why is this important? Because in ten years they won't remember these specialized equations, but they will probably still remember the general time constant equation from all the time they spent learning it in their basic DC electricity courses (and applying it on the job). My motto is, "never remember what you can figure out."
The challenge questions are worthwhile to do in class, even if few students were able to derive them on their own. If nothing else, such an exercise reviews the meaning of "frequency" and its relationship with period, as well as the definition of "duty cycle."
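As a reference for checking students' derivations, here is a sketch of the classic 555 astable timing math (the standard configuration without the duty-cycle steering diode discussed earlier; component values below are illustrative):

```python
import math

def astable_555(ra, rb, c):
    """Timing of the classic 555 astable (no steering diode).
    The capacitor charges through Ra+Rb and discharges through
    Rb alone; each swing between 1/3 and 2/3 of Vcc takes one
    RC "half-life," i.e. ln(2) time constants."""
    t_high = math.log(2) * (ra + rb) * c
    t_low = math.log(2) * rb * c
    period = t_high + t_low
    return 1.0 / period, t_high / period  # frequency (Hz), duty cycle

freq, duty = astable_555(ra=10e3, rb=47e3, c=100e-9)
print(f"f = {freq:.1f} Hz, duty cycle = {duty:.1%}")
```

Deriving the ln(2) factor from the general time constant equation, rather than memorizing the textbook constant 1.44 ≈ 1/ln(2), is precisely the exercise recommended above.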
PARTS AND MATERIALS
Large-value capacitors are required for this experiment to produce time constants slow enough to track with a voltmeter and stopwatch. Be warned that most large capacitors are of the "electrolytic" type, and they are polarity sensitive! One terminal of each capacitor should be marked with a definite polarity sign. Usually capacitors of the size specified have a negative (-) marking or series of negative markings pointing toward the negative terminal. Very large capacitors are often polarity-labeled by a positive (+) marking next to one terminal. Failure to heed proper polarity will almost surely result in capacitor failure, even with a source voltage as low as 6 volts. When electrolytic capacitors fail, they typically explode, spewing caustic chemicals and emitting foul odors. Please, try to avoid this!
I recommend a household light switch for the "SPST toggle switch" specified in the parts list.
Lessons In Electric Circuits, Volume 1, chapter 13: "Capacitors"
Lessons In Electric Circuits, Volume 1, chapter 16: "RC and L/R Time Constants"
Build the "charging" circuit and measure voltage across the capacitor when the switch is closed. Notice how it increases slowly over time, rather than suddenly as would be the case with a resistor. You can "reset" the capacitor back to a voltage of zero by shorting across its terminals with a piece of wire.
The "time constant" (τ) of a resistor capacitor circuit is calculated by taking the circuit resistance and multiplying it by the circuit capacitance. For a 1 kΩ resistor and a 1000 µF capacitor, the time constant should be 1 second. This is the amount of time it takes for the capacitor voltage to increase approximately 63.2% from its present value to its final value: the voltage of the battery.
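The charging curve follows V = Vb(1 − e^(−t/RC)). A quick sketch using the same 6 V battery, 1 kΩ resistor, and 1000 µF capacitor (τ = 1 s) described above:

```python
import math

def capacitor_voltage(v_battery, r_ohms, c_farads, t_seconds):
    """Voltage across a charging capacitor after t seconds,
    starting from zero: V = Vb * (1 - e^(-t / RC))."""
    tau = r_ohms * c_farads  # the time constant, in seconds
    return v_battery * (1.0 - math.exp(-t_seconds / tau))

# 6 V battery, 1 kOhm, 1000 uF (tau = 1 s):
for t in range(6):
    print(f"t = {t} s: {capacitor_voltage(6, 1e3, 1000e-6, t):.2f} V")
```

After one time constant the output reads about 3.79 V, i.e. 63.2% of the way to 6 V, matching the figure quoted above.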
It is educational to plot the voltage of a charging capacitor over time on a sheet of graph paper, to see how the inverse exponential curve develops. In order to plot the action of this circuit, though, we must find a way of slowing it down. A one-second time constant doesn't provide much time to take voltmeter readings!
We can increase this circuit's time constant two different ways: changing the total circuit resistance, and/or changing the total circuit capacitance. Given a pair of identical resistors and a pair of identical capacitors, experiment with various series and parallel combinations to obtain the slowest charging action. You should already know by now how multiple resistors need to be connected to form a greater total resistance, but what about capacitors? This circuit will demonstrate to you how capacitance changes with series and parallel capacitor connections. Just be sure that you insert the capacitor(s) in the proper direction: with the ends labeled negative (-) electrically "closest" to the battery's negative terminal!
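The combination rules the experiment demonstrates can be sketched as follows (the capacitor values are illustrative):

```python
def parallel_c(*caps):
    """Capacitors in parallel simply add."""
    return sum(caps)

def series_c(*caps):
    """Capacitors in series combine like resistors in parallel:
    the reciprocal of the sum of reciprocals."""
    return 1.0 / sum(1.0 / c for c in caps)

c = 1000e-6  # two identical 1000 uF capacitors
print(f"parallel: {parallel_c(c, c) * 1e6:.0f} uF")  # 2000 uF -> slower charging
print(f"series:   {series_c(c, c) * 1e6:.0f} uF")    # 500 uF  -> faster charging
```

So for the slowest charging action, the capacitors go in parallel (largest C) and the resistors in series (largest R), maximizing the product τ = RC.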
The discharging circuit provides the same kind of changing capacitor voltage, except this time the voltage jumps to full battery voltage when the switch closes and slowly falls when the switch is opened. Experiment once again with different combinations of resistors and capacitors, making sure as always that the capacitor's polarity is correct.
Schematic with SPICE node numbers:
Netlist (make a text file containing the following text, verbatim):
Capacitor charging circuit
v1 1 0 dc 6
r1 1 2 1k
c1 2 0 1000u ic=0
.tran 0.1 5 uic
.plot tran v(2,0)
With BogusBus, our signals were very simple and straightforward: each signal wire (1 through 5) carried a single bit of digital data, 0 Volts representing "off" and 24 Volts DC representing "on." Because all the bits arrived at their destination simultaneously, we would call BogusBus a parallel network technology. If we were to improve the performance of BogusBus by adding binary encoding (to the transmitter end) and decoding (to the receiver end), so that more steps of resolution were available with fewer wires, it would still be a parallel network. If, however, we were to add a parallel-to-serial converter at the transmitter end and a serial-to-parallel converter at the receiver end, we would have something quite different.
It is primarily with the use of serial technology that we are forced to invent clever ways to transmit data bits. Because serial data requires us to send all data bits through the same wiring channel from transmitter to receiver, it necessitates a potentially high frequency signal on the network wiring. Consider the following illustration: a modified BogusBus system is communicating digital data in parallel, binary-encoded form. Instead of 5 discrete bits like the original BogusBus, we're sending 8 bits from transmitter to receiver. The A/D converter on the transmitter side generates a new output every second. That makes for 8 bits per second of data being sent to the receiver. For the sake of illustration, let's say that the transmitter is bouncing between an output of 10101010 and 10101011 every update (once per second):
Since only the least significant bit (Bit 1) is changing, the frequency on that wire (to ground) is only 1/2 Hertz. In fact, no matter what numbers are being generated by the A/D converter between updates, the frequency on any wire in this modified BogusBus network cannot exceed 1/2 Hertz, because that's how fast the A/D updates its digital output. 1/2 Hertz is pretty slow, and should present no problems for our network wiring.
On the other hand, if we used an 8-bit serial network, all data bits must appear on the single channel in sequence. And these bits must be output by the transmitter within the 1-second window of time between A/D converter updates. Therefore, the alternating digital output of 10101010 and 10101011 (once per second) would look something like this:
The frequency of our BogusBus signal is now approximately 4 Hertz instead of 1/2 Hertz, an eightfold increase! While 4 Hertz is still fairly slow, and does not constitute an engineering problem, you should be able to appreciate what might happen if we were transmitting 32 or 64 bits of data per update, along with the other bits necessary for parity checking and signal synchronization, at an update rate of thousands of times per second! Serial data network frequencies start to enter the radio range, and simple wires begin to act as antennas, pairs of wires as transmission lines, with all their associated quirks due to inductive and capacitive reactances.
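The worst-case fundamental frequency scales directly with bit rate, as a quick sketch using the figures above shows:

```python
def max_fundamental_hz(bits_per_word, words_per_second):
    """Worst-case fundamental frequency of a serial bit stream:
    an alternating 1010... pattern completes one square-wave
    cycle every two bit periods, so f = bit_rate / 2."""
    bit_rate = bits_per_word * words_per_second
    return bit_rate / 2.0

print(max_fundamental_hz(8, 1))      # 4.0 Hz -- the serial BogusBus example
print(max_fundamental_hz(64, 1000))  # 32000.0 Hz -- 64 bits, 1000 updates/sec
```

Add parity and synchronization bits on top of this, and it is easy to see how practical serial networks reach radio frequencies.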
What is worse, the signals that we're trying to communicate along a serial network are of a square-wave shape, being binary bits of information. Square waves are peculiar things, being mathematically equivalent to an infinite series of sine waves of diminishing amplitude and increasing frequency. A simple square wave at 10 kHz is actually "seen" by the capacitance and inductance of the network as a series of multiple sine-wave frequencies which extend into the hundreds of kHz at significant amplitudes. What we receive at the other end of a long 2-conductor network won't look like a clean square wave anymore, even under the best of conditions!
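The harmonic content described here can be sketched numerically. For a unit-amplitude square wave, only odd harmonics appear, the n-th with amplitude 4/(πn):

```python
import math

def square_wave_partial_sum(t, f0, n_harmonics):
    """Fourier synthesis of a unit square wave: a sum of odd
    sine harmonics, the n-th having amplitude 4/(pi*n)."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1  # odd harmonics only: 1st, 3rd, 5th, ...
        total += (4 / (math.pi * n)) * math.sin(2 * math.pi * n * f0 * t)
    return total

# A 10 kHz square wave's significant harmonics extend well past 100 kHz:
for k in range(6):
    n = 2 * k + 1
    print(f"harmonic {n:2d}: {n * 10:3d} kHz, relative amplitude {4 / (math.pi * n):.3f}")
```

Because the higher harmonics are attenuated and phase-shifted by the line's reactances, what arrives at the far end is the rounded, distorted wave described above rather than a clean square wave.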
When engineers speak of network bandwidth, they're referring to the practical frequency limit of a network medium. In serial communication, bandwidth is a product of data volume (binary bits per transmitted "word") and data speed ("words" per second). The standard measure of network bandwidth is bits per second, or bps. An obsolete unit of bandwidth known as the baud is sometimes falsely equated with bits per second, but is actually the measure of signal level changes per second. Many serial network standards use multiple voltage or current level changes to represent a single bit, and so for these applications bps and baud are not equivalent.
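The bps/baud distinction can be expressed as bps = baud × log₂(levels). A sketch (the 2400-baud figures below are illustrative, not tied to any particular standard):

```python
import math

def bps_from_baud(baud, signal_levels):
    """Bits per second from symbol rate: each symbol (signal
    level change) can encode log2(levels) bits."""
    return baud * math.log2(signal_levels)

print(bps_from_baud(2400, 2))   # 2400.0 -- bps equals baud only for binary signaling
print(bps_from_baud(2400, 16))  # 9600.0 -- 16 levels pack 4 bits into each symbol
```

This is why multi-level modulation lets a channel with a fixed symbol rate carry several times as many bits per second.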
The general BogusBus design, where all bits are voltages referenced to a common "ground" connection, is the worst-case situation for high-frequency square wave data communication. Everything will work well for short distances, where inductive and capacitive effects can be held to a minimum, but for long distances this method will surely be problematic:
A robust alternative to the common ground signal method is the differential voltage method, where each bit is represented by the difference of voltage between a ground-isolated pair of wires, instead of a voltage between one wire and a common ground. This tends to limit the capacitive and inductive effects imposed upon each signal and the tendency for the signals to be corrupted due to outside electrical interference, thereby significantly improving the practical distance of a serial network:
The triangular amplifier symbols represent differential amplifiers, which output a voltage signal between two wires, neither one electrically common with ground. Having eliminated any relation between the voltage signal and ground, the only significant capacitance imposed on the signal voltage is that existing between the two signal wires. Capacitance between a signal wire and a grounded conductor is of much less effect, because the capacitive path between the two signal wires via a ground connection is two capacitances in series (from signal wire #1 to ground, then from ground to signal wire #2), and series capacitance values are always less than any of the individual capacitances. Furthermore, any "noise" voltage induced between the signal wires and earth ground by an external source will be ignored, because that noise voltage will likely be induced on both signal wires in equal measure, and the receiving amplifier only responds to the differential voltage between the two signal wires, rather than the voltage between any one of them and earth ground.
RS-232C is a prime example of a ground-referenced serial network, while RS-422A is a prime example of a differential voltage serial network. RS-232C finds popular application in office environments where there is little electrical interference and wiring distances are short. RS-422A is more widely used in industrial applications where longer wiring distances and greater potential for electrical interference from AC power wiring exists.
However, a large part of the problem with digital network signals is the square-wave nature of such voltages, as was previously mentioned. If only we could avoid square waves altogether, we could avoid many of their inherent difficulties in long, high-frequency networks. One way of doing this is to modulate a sine wave voltage signal with our digital data. "Modulation" means that the magnitude of one signal has control over some aspect of another signal. Radio technology has incorporated modulation for decades, allowing an audio-frequency voltage signal to control either the amplitude (AM) or frequency (FM) of a much higher-frequency "carrier" voltage, which is then sent to the antenna for transmission. The frequency-modulation (FM) technique has found more use in digital networks than amplitude modulation (AM), except that it's referred to as Frequency Shift Keying (FSK). With simple FSK, sine waves of two distinct frequencies are used to represent the two binary states, 1 and 0:
Due to the practical problems of getting the low/high frequency sine waves to begin and end at the zero crossover points for any given combination of 0's and 1's, a variation of FSK called phase-continuous FSK is sometimes used, where the consecutive combination of a low/high frequency represents one binary state and the combination of a high/low frequency represents the other. This also makes for a situation where each bit, whether it be 0 or 1, takes exactly the same amount of time to transmit along the network:
With sine wave signal voltages, many of the problems encountered with square wave digital signals are minimized, although the circuitry required to modulate (and demodulate) the network signals is more complex and expensive.
A modern alternative to sending (binary) digital information via electric voltage signals is to use optical (light) signals. Electrical signals from digital circuits (high/low voltages) may be converted into discrete optical signals (light or no light) with LEDs or solid-state lasers. Likewise, light signals can be translated back into electrical form through the use of photodiodes or phototransistors for introduction into the inputs of gate circuits.
Transmitting digital information in optical form may be done in open air, simply by aiming a laser at a photodetector at a remote distance, but interference with the beam in the form of temperature inversion layers, dust, rain, fog, and other obstructions can present significant engineering problems:
One way to avoid the problems of open-air optical data transmission is to send the light pulses down an ultra-pure glass fiber. Glass fibers will "conduct" a beam of light much as a copper wire will conduct electrons, with the advantage of completely avoiding all the associated problems of inductance, capacitance, and external interference plaguing electrical signals. Optical fibers keep the light beam contained within the fiber core by a phenomenon known as total internal reflectance.
An optical fiber is composed of two layers of ultra-pure glass, each layer made of glass with a slightly different refractive index, or capacity to "bend" light. With one type of glass concentrically layered around a central glass core, light introduced into the central core cannot escape outside the fiber, but is confined to travel within the core:
These layers of glass are very thin, the outer "cladding" typically 125 microns (1 micron = 1 millionth of a meter, or 10⁻⁶ meter) in diameter. This thinness gives the fiber considerable flexibility. To protect the fiber from physical damage, it is usually given a thin plastic coating, placed inside of a plastic tube, wrapped with Kevlar fibers for tensile strength, and given an outer sheath of plastic similar to electrical wire insulation. Like electrical wires, optical fibers are often bundled together within the same sheath to form a single cable.
Optical fibers exceed the data-handling performance of copper wire in almost every regard. They are totally immune to electromagnetic interference and have very high bandwidths. However, they are not without certain weaknesses.
One weakness of optical fiber is a phenomenon known as microbending. This is where the fiber is bent around too small a radius, causing light to escape the inner core through the cladding:
Not only does microbending lead to diminished signal strength due to the lost light, but it also constitutes a security weakness in that a light sensor intentionally placed on the outside of a sharp bend could intercept digital data transmitted over the fiber.
Another problem unique to optical fiber is signal distortion due to multiple light paths, or modes, having different distances over the length of the fiber. When light is emitted by a source, the photons (light particles) do not all travel the exact same path. This fact is patently obvious in any source of light not conforming to a straight beam, but is true even in devices such as lasers. If the optical fiber core is large enough in diameter, it will support multiple pathways for photons to travel, each of these pathways having a slightly different length from one end of the fiber to the other. This type of optical fiber is called multimode fiber:
A light pulse emitted by the LED taking a shorter path through the fiber will arrive at the detector sooner than light pulses taking longer paths. The result is distortion of the square-wave's rising and falling edges, called pulse stretching. This problem becomes worse as the overall fiber length is increased:
However, if the fiber core is made small enough (around 5 microns in diameter), light modes are restricted to a single pathway with one length. Fiber so designed to permit only a single mode of light is known as single-mode fiber. Because single-mode fiber escapes the problem of pulse stretching experienced in long cables, it is the fiber of choice for long-distance (several miles or more) networks. The drawback, of course, is that with only one mode of light, single-mode fibers do not conduct as much light as multimode fibers. Over long distances, this exacerbates the need for "repeater" units to boost light power.
If we want to connect two digital devices with a network, we would have a kind of network known as "point-to-point:"
For the sake of simplicity, the network wiring is symbolized as a single line between the two devices. In actuality, it may be a twisted pair of wires, a coaxial cable, an optical fiber, or even a seven-conductor BogusBus. Right now, we're merely focusing on the "shape" of the network, technically known as its topology.
If we want to include more devices (sometimes called nodes) on this network, we have several options of network configuration to choose from:
Many network standards dictate the type of topology which is used, while others are more versatile. Ethernet, for example, is commonly implemented in a "bus" topology but can also be implemented in a "star" or "ring" topology with the appropriate interconnecting equipment. Other networks, such as RS-232C, are almost exclusively point-to-point; and token ring (as you might have guessed) is implemented solely in a ring topology.
Different topologies have different pros and cons associated with them:
Point-to-point: Quite obviously the only choice for two nodes.
Bus: Very simple to install and maintain. Nodes can be easily added or removed with minimal wiring changes. On the other hand, the one bus network must handle all communication signals from all nodes. This is known as broadcast networking, and is analogous to a group of people talking to each other over a single telephone connection, where only one person can talk at a time (limiting data exchange rates), and everyone can hear everyone else when they talk (which can be a data security issue). Also, a break in the bus wiring can lead to nodes being isolated in groups.
Star: With devices known as "gateways" at branching points in the network, data flow can be restricted between nodes, allowing for private communication between specific groups of nodes. This addresses some of the speed and security issues of the simple bus topology. However, those branches could easily be cut off from the rest of the "star" network if one of the gateways were to fail. Can also be implemented with "switches" to connect individual nodes to a larger network on demand. Such a switched network is similar to the standard telephone system.
Ring: This topology provides the best reliability with the least amount of wiring. Since each node has two connection points to the ring, a single break in any part of the ring doesn't affect the integrity of the network. The devices, however, must be designed with this topology in mind. Also, the network must be interrupted to install or remove nodes. As with bus topology, ring networks are broadcast by nature.
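The ring's tolerance of a single break, versus the bus's tendency to split into isolated groups, can be checked with a small graph sketch (the node and segment names here are invented for illustration):

```python
from collections import deque

def connected(nodes, edges):
    """BFS reachability: True if every node is reachable from the first."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen == set(nodes)

nodes = ["A", "B", "C", "D"]
bus = [("A", "B"), ("B", "C"), ("C", "D")]   # nodes strung along one cable
ring = bus + [("D", "A")]                     # same chain, loop closed

# Break the same segment (B-C) in each topology:
print(connected(nodes, [e for e in bus if e != ("B", "C")]))   # bus splits in two
print(connected(nodes, [e for e in ring if e != ("B", "C")]))  # ring stays whole
```

Because each ring node has two paths to every other node, any single cut still leaves one path intact; the bus has no such redundancy.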
As you might suspect, two or more ring topologies may be combined to give the "best of both worlds" in a particular application. Quite often, industrial networks end up in this fashion over time, simply from engineers and technicians joining multiple networks together for the benefit of plant-wide information access.
Aside from the issues of the physical network (signal types and voltage levels, connector pinouts, cabling, topology, etc.), there needs to be a standardized way in which communication is arbitrated between multiple nodes in a network, even if it's as simple as a two-node, point-to-point system. When a node "talks" on the network, it is generating a signal on the network wiring, be it high and low DC voltage levels, some kind of modulated AC carrier wave signal, or even pulses of light in a fiber. Nodes that "listen" are simply measuring that applied signal on the network (from the transmitting node) and passively monitoring it. If two or more nodes "talk" at the same time, however, their output signals may clash (imagine two logic gates trying to apply opposite signal voltages to a single line on a bus!), corrupting the transmitted data.
The standardized method by which nodes are allowed to transmit to the bus or network wiring is called a protocol. There are many different protocols for arbitrating the use of a common network between multiple nodes, and I'll cover just a few here. However, it's good to be aware of these few, and to understand why some work better for some purposes than others. Usually, a specific protocol is associated with a standardized type of network. This is merely another "layer" to the set of standards which are specified under the titles of various networks.
The International Organization for Standardization (ISO) has specified a general architecture of network specifications in their DIS7498 model (applicable to most any digital network). Consisting of seven "layers," this outline attempts to categorize all levels of abstraction necessary to communicate digital data.
Some established network protocols only cover one or a few of the DIS7498 levels. For example, the widely used RS-232C serial communications protocol really only addresses the first ("physical") layer of this seven-layer model. Other protocols, such as the X-windows graphical client/server system developed at MIT for distributed graphic-user-interface computer systems, cover all seven layers.
Different protocols may use the same physical layer standard. An example of this is the RS-422A and RS-485 protocols, both of which use the same differential-voltage transmitter and receiver circuitry, using the same voltage levels to denote binary 1's and 0's. On a physical level, these two communication protocols are identical. However, on a more abstract level the protocols are different: RS-422A is point-to-point only, while RS-485 supports a bus topology "multidrop" with up to 32 addressable nodes.
Perhaps the simplest type of protocol is the one where there is only one transmitter, and all the other nodes are merely receivers. Such is the case for BogusBus, where a single transmitter generates the voltage signals impressed on the network wiring, and one or more receiver units (with 5 lamps each) light up in accord with the transmitter's output. This is always the case with a simplex network: there's only one talker, and everyone else listens!
When we have multiple transmitting nodes, we must orchestrate their transmissions in such a way that they don't conflict with one another. Nodes shouldn't be allowed to talk when another node is talking, so we give each node the ability to "listen" and to refrain from talking until the network is silent. This basic approach is called Carrier Sense Multiple Access (CSMA), and there exist a few variations on this theme. Please note that CSMA is not a standardized protocol in itself, but rather a methodology that certain protocols follow.
One variation is to simply let any node begin to talk as soon as the network is silent. This is analogous to a group of people meeting at a round table: anyone has the ability to start talking, so long as they don't interrupt anyone else. As soon as the last person stops talking, the next person waiting to talk will begin. So, what happens when two or more people start talking at once? In a network, the simultaneous transmission of two or more nodes is called a collision. With CSMA/CD (CSMA/Collision Detection), the nodes that collide simply reset themselves with a random delay timer circuit, and the first one to finish its time delay tries to talk again. This is the basic protocol for the popular Ethernet network.
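The collide-and-retry behavior can be sketched in a few lines. This is only a toy model of the idea (one frame per node, uniform random delays), not Ethernet's actual binary exponential backoff:

```python
import random

def csma_cd(nodes, max_slots=10_000, seed=1):
    """Toy CSMA/CD: each node has one frame to send. A node transmits only
    when its backoff timer has expired; if two or more transmit in the same
    time slot, they collide, and each picks a fresh random delay before
    trying again."""
    rng = random.Random(seed)
    backoff = {n: 0 for n in nodes}
    order = []                            # successful transmissions, in order
    for _ in range(max_slots):
        ready = [n for n in nodes if n not in order and backoff[n] == 0]
        for n in nodes:                   # waiting nodes count down one slot
            if n not in order and backoff[n] > 0:
                backoff[n] -= 1
        if len(ready) == 1:               # exactly one talker: success
            order.append(ready[0])
        elif len(ready) > 1:              # collision: everyone backs off
            for n in ready:
                backoff[n] = rng.randint(1, 4)
        if len(order) == len(nodes):
            break
    return order

print(csma_cd(["A", "B", "C"]))
```

Notice that the order in which nodes get through is random: CSMA/CD guarantees eventual access, not a fixed schedule.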
Another variation of CSMA is CSMA/BA (CSMA/Bitwise Arbitration), where colliding nodes refer to pre-set priority numbers which dictate which one has permission to speak first. In other words, each node has a "rank" which settles any dispute over who gets to start talking first after a collision occurs, much like a group of people where dignitaries and common citizens are mixed. If a collision occurs, the dignitary is generally allowed to speak first and the common person waits afterward.
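The CAN bus used in automobiles is a well-known implementation of bitwise arbitration. The sketch below assumes a wired-AND bus where a 0 bit is "dominant" (overrides a 1) and a lower ID number means higher priority; a node that writes a recessive 1 but reads a dominant 0 back knows it has lost and drops out, so the winner's frame is never corrupted:

```python
def arbitrate(ids, width=8):
    """CAN-style bitwise arbitration: all colliding nodes write their ID one
    bit at a time, MSB first, onto a wired-AND bus. Nodes whose bit differs
    from the bus level drop out, so the lowest (highest-priority) ID wins."""
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)   # wired-AND: 0 dominates
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]

print(arbitrate([0x55, 0x23, 0x41]))   # the lowest ID wins the bus
```

This is the "dignitary speaks first" rule in circuit form: the priority ranking is fixed ahead of time by the assigned ID numbers.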
In either of the two examples above (CSMA/CD and CSMA/BA), we assumed that any node could initiate a conversation so long as the network was silent. This is referred to as the "unsolicited" mode of communication. There is a variation called "solicited" mode for either CSMA/CD or CSMA/BA where the initial transmission is only allowed to occur when a designated master node requests (solicits) a reply. Collision detection (CD) or bitwise arbitration (BA) applies only to post-collision arbitration as multiple nodes respond to the master device's request.
An entirely different strategy for node communication is the Master/Slave protocol, where a single master device allots time slots for all the other nodes on the network to transmit, and schedules these time slots so that multiple nodes cannot collide. The master device addresses each node by name, one at a time, letting that node talk for a certain amount of time. When it is finished, the master addresses the next node, and so on, and so on.
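Because the master is the only node that ever initiates, a Master/Slave network needs no collision handling at all. The slave names and readings below are invented for illustration:

```python
# Toy master/slave polling loop: the master addresses each slave by name,
# one at a time; slaves speak only when polled, so collisions are impossible.
slaves = {
    "flow_meter":  lambda: 42.0,    # invented device names and readings
    "temp_probe":  lambda: 21.5,
    "level_gauge": lambda: 0.83,
}

def poll_cycle(slaves):
    """One full scan: the master queries every slave in its fixed schedule."""
    replies = {}
    for name, respond in slaves.items():   # schedule is set by the master
        replies[name] = respond()          # only the addressed node talks
    return replies

print(poll_cycle(slaves))
```

The price of this simplicity is that all traffic waits on the master's schedule, and a failed master silences the whole network.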
Yet another strategy is the Token-Passing protocol, where each node gets a turn to talk (one at a time), and then grants permission for the next node to talk when it's done. Permission to talk is passed around from node to node as each one hands off the "token" to the next in sequential order. The token itself is not a physical thing: it is a series of binary 1's and 0's broadcast on the network, carrying a specific address of the next node permitted to talk. Although token-passing protocol is often associated with ring-topology networks, it is not restricted to any topology in particular. And when this protocol is implemented in a ring network, the sequence of token passing does not have to follow the physical connection sequence of the ring.
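A minimal sketch of token circulation follows; note that the logical hand-off order is just a table, independent of any physical wiring, exactly as the text observes. Node names and message queues are invented for illustration:

```python
def token_rounds(nodes, messages, rounds=2):
    """Token-passing sketch: the 'token' names the next node permitted to
    talk, circulating in a fixed logical order. Each holder transmits one
    queued message (if any), then hands the token on."""
    successor = {n: nodes[(i + 1) % len(nodes)] for i, n in enumerate(nodes)}
    token, log = nodes[0], []
    for _ in range(rounds * len(nodes)):
        if messages.get(token):           # holder talks only if it has data
            log.append((token, messages[token].pop(0)))
        token = successor[token]          # pass the token to the next node
    return log

queues = {"A": ["hello"], "B": [], "C": ["status=OK"]}
print(token_rounds(["A", "B", "C"], queues))
```

Unlike CSMA, access here is deterministic: every node is guaranteed its turn within one circulation of the token.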
Just as with topologies, multiple protocols may be joined together over different segments of a heterogeneous network, for maximum benefit. For instance, a dedicated Master/Slave network connecting instruments together on the manufacturing plant floor may be linked through a gateway device to an Ethernet network which links multiple desktop computer workstations together, one of those computer workstations acting as a gateway to link the data to an FDDI fiber network back to the plant's mainframe computer. Each network type, topology, and protocol serves different needs and applications best, but through gateway devices, they can all share the same data.
It is also possible to blend multiple protocol strategies into a new hybrid within a single network type. Such is the case for Foundation Fieldbus, which combines Master/Slave with a form of token-passing. A Link Active Scheduler (LAS) device sends scheduled "Compel Data" (CD) commands to query slave devices on the Fieldbus for time-critical information. In this regard, Fieldbus is a Master/Slave protocol. However, when there's time between CD queries, the LAS sends out "tokens" to each of the other devices on the Fieldbus, one at a time, giving them opportunity to transmit any unscheduled data. When those devices are done transmitting their information, they return the token back to the LAS. The LAS also probes for new devices on the Fieldbus with a "Probe Node" (PN) message, which is expected to produce a "Probe Response" (PR) back to the LAS. The responses of devices back to the LAS, whether by PR message or returned token, dictate their standing on a "Live List" database which the LAS maintains. Proper operation of the LAS device is absolutely critical to the functioning of the Fieldbus, so there are provisions for redundant LAS operation by assigning "Link Master" status to some of the nodes, empowering them to become alternate Link Active Schedulers if the operating LAS fails.
Other data communications protocols exist, but these are the most popular. I had the opportunity to work on an old (circa 1975) industrial control system made by Honeywell where a master device called the Highway Traffic Director, or HTD, arbitrated all network communications. What made this network interesting is that the signal sent from the HTD to all slave devices for permitting transmission was not communicated on the network wiring itself, but rather on sets of individual twisted-pair cables connecting the HTD with each slave device. Devices on the network were then divided into two categories: those nodes connected to the HTD which were allowed to initiate transmission, and those nodes not connected to the HTD which could only transmit in response to a query sent by one of the former nodes. Primitive and slow are the only fitting adjectives for this communication network scheme, but it functioned adequately for its time.
Roboticists develop man-made mechanical devices that can move by themselves, whose motion must be modelled, planned, sensed, actuated and controlled, and whose motion behaviour can be influenced by “programming”. Robots are called “intelligent” if they succeed in moving in safe interaction with an unstructured environment, while autonomously achieving their specified tasks.
This definition implies that a device can only be called a “robot” if it contains a movable mechanism, influenced by sensing, planning, actuation and control components. It does not imply that a minimum number of these components must be implemented in software, or be changeable by the “consumer” who uses the device; for example, the motion behaviour can have been hard-wired into the device by the manufacturer.
So, the presented definition, as well as the rest of the material in this part of the WEBook, covers not just “pure” robotics or only “intelligent” robots, but rather the somewhat broader domain of robotics and automation. This includes “dumb” robots such as: metal and woodworking machines, “intelligent” washing machines, dish washers and pool cleaning robots, etc. These examples all have sensing, planning and control, but often not in individually separated components. For example, the sensing and planning behaviour of the pool cleaning robot have been integrated into the mechanical design of the device, by the intelligence of the human developer.
Robotics is, to a very large extent, all about system integration: achieving a task by an actuated mechanical device, via an "intelligent" integration of components, many of which it shares with other domains, such as systems and control, computer science, character animation, machine design, computer vision, artificial intelligence, cognitive science, biomechanics, etc. In addition, the boundaries of robotics cannot be clearly defined, since its "core" ideas, concepts and algorithms are being applied in an ever-increasing number of "external" applications, and, vice versa, core technologies from other domains (vision, biology, cognitive science or biomechanics, for example) are becoming crucial components in more and more modern robotic systems.
This part of the WEBook makes an effort to define exactly what the above-mentioned core material of the robotics domain is, and to describe it in a consistent and motivated structure. Nevertheless, this chosen structure is only one of the many possible "views" that one might want to have on the robotics domain.
In the same vein, the above-mentioned “definition” of robotics is not meant to be definitive or final, and it is only used as a rough framework to structure the various chapters of the WEBook. (A later phase in the WEBook development will allow different “semantic views” on the WEBook material.)
Components of robotic systems
This figure depicts the components that are part of all robotic systems. The purpose of this Section is to describe the semantics of the terminology used to classify the chapters in the WEBook: “sensing”, “planning”, “modelling”, “control”, etc.
The real robot is some mechanical device (“mechanism”) that moves around in the environment, and, in doing so, physically interacts with this environment. This interaction involves the exchange of physical energy, in some form or another. Both the robot mechanism and the environment can be the “cause” of the physical interaction through “Actuation”, or experience the “effect” of the interaction, which can be measured through “Sensing”.
Robotics as an integrated system of control interacting with the physical world.
Sensing and actuation are the physical ports through which the "Controller" of the robot determines the interaction of its mechanical body with the physical world. As mentioned before, the controller can, in one extreme, consist of software only, while in the other extreme everything can be implemented in hardware.
Within the Controller component, several sub-activities are often identified:
Modelling. The input-output relationships of all control components can (but need not) be derived from information that is stored in a model. This model can have many forms: analytical formulas, empirical look-up tables, fuzzy rules, neural networks, etc.
The name “model” often gives rise to heated discussions among different research “schools”, and the WEBook is not interested in taking a stance in this debate: within the WEBook, “model” is to be understood with its minimal semantics: “any information that is used to determine or influence the input-output relationships of components in the Controller.”
The other components discussed below can all have models inside. A “System model” can be used to tie multiple components together, but it is clear that not all robots use a System model. The “Sensing model” and “Actuation model” contain the information with which to transform raw physical data into task-dependent information for the controller, and vice versa.
Planning. This is the activity that predicts the outcome of potential actions, and selects the “best” one. Almost by definition, planning can only be done on the basis of some sort of model.
Regulation. This component processes the outputs of the sensing and planning components, to generate an actuation setpoint. Again, this regulation activity could or could not rely on some sort of (system) model.
The term “control” is often used instead of “regulation”, but it is impossible to clearly identify the domains that use one term or the other. The meaning used in the WEBook will be clear from the context.
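The sense-regulate-actuate loop described above can be sketched in a few lines. Everything concrete here is an invented stand-in: a one-dimensional "plant" whose position the actuator nudges directly, and a proportional gain `kp`:

```python
def regulate(setpoint, sense, actuate, kp=0.5, steps=20):
    """Minimal 'Regulation' component: read the sensor, compare against the
    setpoint, and command the actuator proportionally to the error."""
    for _ in range(steps):
        error = setpoint - sense()
        actuate(kp * error)

# Toy one-dimensional plant: a position the actuator nudges directly.
state = {"pos": 0.0}
regulate(10.0,
         sense=lambda: state["pos"],
         actuate=lambda u: state.__setitem__("pos", state["pos"] + u))
print(round(state["pos"], 3))   # the position settles near the setpoint
```

Note that this regulator uses no model at all, illustrating the remark above that regulation "could or could not rely on some sort of (system) model."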
Scales in robotic systems
The above-mentioned "components" description of a robotic system is to be complemented by a "scale" description: the following system scales have a large influence on the specific content of the planning, sensing, modelling and control components at a particular scale, and hence also on the corresponding sections of the WEBook.
Mechanical scale. The physical volume of the robot determines to a large extent the limits of what can be done with it. Roughly speaking, a large-scale robot (such as an autonomous container crane or a space shuttle) has different capabilities and control problems than a macro robot (such as an industrial robot arm), a desktop robot (such as those "sumo" robots popular with hobbyists), or milli, micro, or nano robots.
Spatial scale. There are large differences between robots that act in 1D, 2D, 3D, or 6D (three positions and three orientations).
Time scale. There are large differences between robots that must react within hours, seconds, milliseconds, or microseconds.
Power density scale. A robot must be actuated in order to move, but actuators need space as well as energy, so the ratio of available power to occupied space determines some capabilities of the robot.
System complexity scale. The complexity of a robot system increases with the number of interactions between independent sub-systems, and the control components must adapt to this complexity.
Computational complexity scale. Robot controllers are inevitably running on real-world computing hardware, so they are constrained by the available number of computations, the available communication bandwidth, and the available memory storage.
Obviously, these scale parameters never apply completely independently to the same system. For example, a system that must react at a microseconds time scale cannot be of macro mechanical scale or involve a high number of communication interactions with subsystems.
Finally, no description of even scientific material is ever fully objective or context-free, in the sense that it is very difficult for contributors to the WEBook to "forget" their background when writing their contribution. In this respect, robotics has, roughly speaking, two faces: (i) the mathematical and engineering face, which is quite "standardized" in the sense that a large consensus exists about the tools and theories to use ("systems theory"), and (ii) the AI face, which is rather poorly standardized, not because of a lack of interest or research efforts, but because of the inherent complexity of "intelligent behaviour." The terminology and systems-thinking of both backgrounds are significantly different, hence the WEBook will accommodate sections on the same material but written from various perspectives. This is not a "bug", but a "feature": having the different views in the context of the same WEBook can only lead to a better mutual understanding and respect.
Research in engineering robotics follows the bottom-up approach: existing and working systems are extended and made more versatile. Research in artificial intelligence robotics is top-down: assuming that a set of low-level primitives is available, how could one apply them in order to increase the “intelligence” of a system. The border between both approaches shifts continuously, as more and more “intelligence” is cast into algorithmic, system-theoretic form. For example, the response of a robot to sensor input was considered “intelligent behaviour” in the late seventies and even early eighties. Hence, it belonged to A.I. Later it was shown that many sensor-based tasks such as surface following or visual tracking could be formulated as control problems with algorithmic solutions. From then on, they did not belong to A.I. any more.
Robotics - Types of Robots
Ask a number of people to describe a robot and most will answer that it looks like a human. Interestingly, a robot that looks like a human is probably the most difficult robot to make. It is usually a waste of time, and not the most sensible choice, to model a robot after a human being. A robot needs to be above all functional, designed with qualities that suit its primary tasks. The task at hand determines whether the robot is big or small, able to move or nailed to the ground. Each task implies different qualities, form and function; a robot needs to be designed with the task in mind.
Mars Explorer images and other space robot images courtesy of NASA.
Mobile robots are able to move; usually they perform tasks such as searching areas. A prime example is the Mars Explorer, specifically designed to roam the Martian surface.
Mobile robots are used for tasks where people cannot go, such as searching collapsed buildings for survivors, either because the area is too dangerous or because people cannot reach it.
Mobile robots can be divided in two categories:
Rolling Robots: Rolling robots have wheels to move around. They can move around quickly and easily, but they are only useful in flat areas; rocky terrain gives them a hard time. Flat terrain is their territory.
Walking Robots: Robots on legs are usually brought in when the terrain is rocky and difficult to enter with wheels. Legged robots have a hard time shifting their balance to keep from tumbling. That's why most walking robots have at least four legs, and usually six or more; even when they lift one or more legs, they still keep their balance. The design of legged robots is often modeled after insects or crawfish.
Robots are not only used to explore areas or imitate a human being. Most robots perform repetitive tasks without ever moving an inch; most are "working" in industrial settings. Dull, repetitive tasks are especially suitable for robots: a robot never grows tired and will perform its duty day and night without ever complaining. When the tasks at hand are done, the robot can be reprogrammed to perform other tasks.
Autonomous robots are self-supporting or, in other words, self-contained. In a way, they rely on their own "brains".
Autonomous robots run a program that gives them the ability to decide which action to perform based on their surroundings. At times these robots even learn new behavior: they start out with a short routine and adapt it to become more successful at the task they perform, and the most successful routine is repeated; in this way their behavior is shaped. Autonomous robots can learn to walk or to avoid obstacles in their way. Think of a six-legged robot: at first the legs move at random, but after a little while the robot adjusts its program and settles on a pattern that enables it to move in one direction.
Despite its autonomy, an autonomous robot is not a very clever or intelligent unit. Its memory and processing capacity are usually limited; in that respect, an autonomous robot can be compared to an insect.
When a robot needs to perform more complicated yet unpredictable tasks, an autonomous robot is not the right choice.
Complicated tasks are still best performed by human beings with real brainpower, but a person can guide a robot by remote control, performing difficult and usually dangerous tasks without being at the spot where they are carried out. To detonate a bomb, it is safer to send a robot into the danger area.
BEAM is short for Biology, Electronics, Aesthetics and Mechanics. BEAM robots are made by hobbyists. BEAM robots can be simple and very suitable for starters.
Robots are often modeled after nature, and a lot of BEAM robots look remarkably like insects. Insects are easy to build in mechanical form, and not only are the mechanics an inspiration: the limited behavior of an insect can easily be programmed with a limited amount of memory and processing power.
Like all robots, they contain electronics; without electronic circuits the motors cannot be controlled. Many BEAM robots also use solar power as their main source of energy.
A BEAM robot should look nice and attractive: not just a printed circuit board with some parts, but an appealing and original appearance.
In contrast with expensive, big robots, BEAM robots are cheap and simple, built out of recycled materials, and run on solar energy.
Robotics - Robotics Technology
Most industrial robots have at least the following five parts:
This section discusses the basic technologies of a robot.
Robotics - Robotics Planning
A motion of the form move along path until sensory-condition.
A motion of the form move along direction d with force = 0 perpendicularly to d. More intuitively, moving along a surface while maintaining a fixed pressure, such as when scraping paint off of a window with a razor.
A device that converts software commands into physical motion, typically electric motors or hydraulic or pneumatic cylinders.
Sensing system that works by measuring the time of flight of a sound pulse: the time for the pulse to be generated, reach an object, and be reflected back to the sensor. Wide-angle but reasonably accurate in depth (the wide angle is the disadvantage).
Very accurate angular resolution system but terrible in depth measurement.
An easily recognizable, unique element of the environment that the robot can use to get its bearings.
continuous -- states and actions are drawn from a continuum of physical configurations and motions
Task planning is divided into three phases: modeling, task specification, and manipulator program synthesis.
There are three approaches to specifying the model state:
Using a CAD system to draw the positions of the objects in the desired configuration.
Using symbolic spatial relationships between object features (such as "face1 against face2"). This is the most common method, but the relationships must be converted into numerical form to be used.
One problem is that these configurations may overconstrain the state. Symmetry is an example; it does not matter what the orientation of a peg in a hole is. The final state may also not completely specify the operation; for example, it may not say how hard to tighten a bolt.
A specification of the position of every point in the object, relative to a fixed frame of reference. To specify the configuration of a rigid object A, it is enough to specify the position and orientation of its frame F_A with respect to the world frame F_W. The subset of W occupied by A at configuration q is denoted A(q).
The dimension of a configuration space is the number of independent parameters required to represent it as R^m. This is 3 for 2-D, and 6 for 3-D.
A representation of a local portion of the configuration space. C can be decomposed into a finite union of slightly overlapping patches called charts, each represented as a copy of R^m.
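The planar case can be made concrete: the configuration of a rigid body in 2-D is the triple q = (x, y, θ) (dimension 3, as stated), and the occupied region A(q) is obtained by mapping each body-frame point into the world frame. The particular pose and point below are invented for illustration:

```python
import math

def transform_point(q, p):
    """Map a body-frame point p = (px, py) into the world frame, given the
    body's configuration q = (x, y, theta): rotate by theta, then translate."""
    x, y, theta = q
    px, py = p
    return (x + px * math.cos(theta) - py * math.sin(theta),
            y + px * math.sin(theta) + py * math.cos(theta))

q = (2.0, 1.0, math.pi / 2)            # pose of frame F_A w.r.t. F_W
print(transform_point(q, (1.0, 0.0)))  # a corner of A, in world coordinates
```

Applying `transform_point` to every point of A yields A(q), so the three numbers in q really do determine the position of every point in the object.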
Motions of material bodies under the action of forces.
Rotation around a fixed hub.
Linear movement, as with a piston in a cylinder.
To be complete for motion planning, skeletonization methods must satisfy two properties:
The kinds of skeletonizations are visibility graphs, Voronoi diagrams, and roadmaps, which consist of silhouette curves and linking curves. (This formulation is due to Russell and Norvig; note that Latombe calls all of these things "roadmap methods.")
A roadmap method based on retraction. A continuous function of Cfree is defined onto a one-dimensional subset of itself. The Voronoi diagram consists of those curves in the space that are equidistant from two or more obstacles; these curves form the skeleton.
A global approach to motion planning. The intuitive idea is to break the space down into a finite number of discrete chunks.
Potential fields are very efficient but suffer from local minima. Two approaches overcome this:
design potential functions with no local minima other than the goal configuration
complement the basic potential field approach with mechanisms to escape from local minima
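A minimal sketch of the basic potential field approach follows: an attractive quadratic well at the goal, a repulsive barrier near each obstacle, and naive finite-difference gradient descent. All gains, positions, and step sizes are invented for illustration; the obstacle is deliberately placed off the straight-line path, since an on-path obstacle can produce exactly the local minima described above.

```python
import math

def potential(q, goal, obstacles, eta=1.0, rho0=1.0):
    """Attractive quadratic well at the goal, plus a repulsive barrier that
    switches on within distance rho0 of each obstacle (constants illustrative)."""
    u = 0.5 * ((q[0] - goal[0])**2 + (q[1] - goal[1])**2)
    for ox, oy in obstacles:
        d = math.hypot(q[0] - ox, q[1] - oy)
        if d < rho0:
            u += 0.5 * eta * (1.0 / d - 1.0 / rho0)**2
    return u

def descend(q, goal, obstacles, step=0.05, iters=500):
    """Follow the negative gradient (estimated by finite differences).
    May stall in a local minimum, which is the weakness noted above."""
    h = 1e-4
    for _ in range(iters):
        gx = (potential((q[0] + h, q[1]), goal, obstacles) -
              potential((q[0] - h, q[1]), goal, obstacles)) / (2 * h)
        gy = (potential((q[0], q[1] + h), goal, obstacles) -
              potential((q[0], q[1] - h), goal, obstacles)) / (2 * h)
        q = (q[0] - step * gx, q[1] - step * gy)
    return q

q = descend((0.0, 1.0), goal=(4.0, 1.0), obstacles=[(2.0, -0.5)])
print(tuple(round(c, 2) for c in q))   # converges toward the goal
```

The two remedies listed above correspond to replacing `potential` with a minimum-free navigation function, or wrapping `descend` with an escape mechanism such as random restarts.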
The term generalized configuration space is used to describe systems in which other objects are included as part of the configuration. These may be movable, and their shapes may vary.
Partition the generalized configuration space into finitely many states. The planning problem then becomes a logical one, like the blocks world. No general method for partitioning space has yet been found.
Restrict object motions to simplify planning.
The second and third extensions yield configuration spaces of arbitrarily large dimensions. Planning can be done in a composite configuration space which is the cross-product of the individual configuration spaces (this is called centralized planning), or another method called decoupled planning can be used to plan the motions more or less independently and interactions are only considered in the second phase of planning.
There are three principal considerations in gripping an object. They are:
stability -- the grasp should be stable in the presence of forces exerted on the grasped object during transfer and parts-mating motions
These theorems are taken from Latombe's book (see Sources).
Theorem 2 (Joints) In the absence of obstacles, deciding whether a planar linkage in some initial configuration can be moved so that a certain joint reaches a given point in the plane is PSPACE-hard.
Theorem 6 (Velocity-bounding) Planning the motion of a rigid object translating without rotation in three dimensions among arbitrarily many moving obstacles that may both translate and rotate is a PSPACE-hard problem if the velocity modulus of the object is bounded, and an NP-hard problem otherwise.
It is possible to approximate the real problem before giving it to the motion planner; it is reasonable to trade generality for improved time performance.
Simplification heuristics make only local plans, by breaking the problem into subproblems. For example, many localities are stereotyped situations, such as moving through a door or turning in a corridor.
This approach breaks the problem down into task-achieving behaviors (such as wandering, avoiding obstacles, or making maps) rather than decomposing it functionally (into sensing, planning, acting).
In subsumption architectures, levels of competence are stacked one on top of another, ranging from the lowest level (object avoidance) to higher levels for planning and map-making. Higher levels may interfere with lower levels, but each level's architecture is built, tested and completed before the next level is added. In this way the system is robust and incrementally more powerful.
Individual levels consist of augmented finite state machines connected by message-passing wires. Higher levels may inhibit signals on these wires, or replace them with their own signals; this is how they exercise control over more basic functions.
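The inhibition mechanism can be sketched with two layers sharing one output "wire". This is a drastically simplified stand-in for the augmented finite state machines of a real subsumption architecture; the sensor field and behavior names are invented:

```python
def wander(_sensors):
    return "forward"                    # level 0: default behavior, always active

def avoid(sensors):
    if sensors["obstacle_cm"] < 30:     # level 1: acts only when an obstacle looms
        return "turn_left"
    return None                         # None = do not inhibit the lower level

def arbitrate(sensors, layers=(avoid, wander)):
    """Highest layer with an opinion wins; otherwise control falls through
    to the more basic level, as in subsumption's inhibition wires."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command

print(arbitrate({"obstacle_cm": 12}))   # avoidance inhibits wandering
print(arbitrate({"obstacle_cm": 90}))   # wandering proceeds uninhibited
```

Note how the lower level is complete and functional on its own, matching the incremental build-and-test discipline described above.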
(For more information, see the AI Qual Summary on Agent Architectures.)
After each action was executed, PLANEX would execute the shortest plan subsequence that led to a goal and whose preconditions were satisfied. In this way, actions that failed would be retried and serendipity would lead to reduced effort. If no subsequence applied, PLANEX called STRIPS to make a new plan.
This compilation approach distinguishes between the use of explicit knowledge representation by the designers (the "grand strategy") and the use of explicit knowledge within the agent architecture (the "grand tactic"). Rosenschein's compiler generates FSMs that can be proved to correspond to logical propositions about the environment, provided the compiler has the correct initial state and physics.
The FLAKEY system at SRI used situated automata to navigate, run errands, and ask questions, and had no explicit representation.
Russell and Norvig, Artificial Intelligence: A Modern Approach, Chapter 25.
Latombe, Chapter 1.
See the AI Qual Reading List for further details on these sources.
Robotics - Robotics Tutorials
Everything you need to know to get into robotics is here, from motors to programming, the tutorials are organised by difficulty or subject matter. See the links above to pick a section that interests you.