
Saturday, 16 December 2017

Semiconductor Testing


http://www.readydaq.com/content/blog/semiconductor-testing

Automated test equipment (ATE) is computer-controlled test and measurement equipment that allows for testing with minimal human interaction. Each tested device is referred to as a device under test (DUT). The advantages of this kind of testing include reduced testing time, repeatability, and cost efficiency in high volume. The chief disadvantages are the upfront costs of programming and setup.
Automated test equipment can test printed circuit boards, interconnections, and verifications. They are commonly used in wireless communication and radar. Simple ATEs include volt-ohm meters that measure resistance and voltages in PCs; complex ATE systems have several mechanisms that automatically run high-level electronic diagnostics.
ATE is used to quickly confirm whether a DUT works and to find defects. When the first out-of-tolerance value is detected, the testing stops and the device fails.
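As a rough sketch of that stop-on-first-failure flow, the short Python example below walks an ordered list of limit checks and fails the DUT at the first out-of-tolerance reading; the test names, limits, and the measure() callback are invented purely for illustration.

# Hypothetical go/no-go sequence: stop at the first out-of-tolerance reading.
def run_test_sequence(dut_id, tests, measure):
    """tests: list of (name, low_limit, high_limit); measure(name) returns a reading."""
    for name, low, high in tests:
        value = measure(name)
        if not (low <= value <= high):
            print(f"{dut_id}: FAIL at '{name}' ({value} outside {low}..{high})")
            return False
    print(f"{dut_id}: PASS ({len(tests)} tests)")
    return True

# Illustrative limits and a fake measurement source.
tests = [("supply_current_mA", 10.0, 25.0), ("vout_V", 3.20, 3.40)]
run_test_sequence("DUT-001", tests, measure=lambda name: {"supply_current_mA": 18.2, "vout_V": 3.31}[name])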

Semiconductor Testing

For ATEs that test semiconductors, the architecture consists of a master controller (a computer) that synchronizes one or more sources and capture instruments, such as an industrial PC or mass interconnect. The DUT is physically connected to the ATE by a machine called a handler, or prober, and through a customized Interface Test Adapter (ITA) that adapts the ATE's resources to the DUT.
When testing packaged parts, a handler is used to place the device on a customized interface board, whereas silicon wafers are tested directly with high-precision probes.

Test Types

Logic Testing

Logic test systems are designed to test microprocessors, gate arrays, ASICs and other logic devices.
Linear or mixed-signal equipment tests components such as analog-to-digital converters (ADCs), digital-to-analog converters (DACs), comparators, track-and-hold amplifiers, and video products. These components incorporate features such as audio interfaces, signal processing functions, and high-speed transceivers.
Passive component ATEs test passive components including capacitors, resistors, inductors, etc. Typically, testing is done by the application of a test current.
Discrete ATEs test active components including transistors, diodes, MOSFETs, regulators, TRIACS, Zeners, SCRs, and JFETs.

Printed Circuit Board Testing

Printed circuit board testers include manufacturing defect analyzers, in-circuit testers, and functional analyzers.
Manufacturing defect analyzers (MDAs) detect manufacturing defects, such as shorts and missing components, but can't test digital ICs because they test with the DUT powered down (cold). As a result, they assume the ICs are functional. MDAs are much less expensive than other test options and are also referred to as analog circuit testers.
In-circuit analyzers test components that are part of a board assembly. The components under test are "in a circuit." The DUT is powered up (hot). In-circuit testers are very powerful but are limited due to the high density of tracks and components in most current designs. The pins for contact must be placed very accurately in order to make good contact. They are also referred to as digital circuit testers or ICT.
A functional test simulates an operating environment and tests a board against its functional specification. Functional automatic test equipment (FATE) is unpopular because such equipment has not been able to keep up with the increasing speed of boards, which creates a lag between the board under test and the manufacturing process. There are several types of functional test equipment, and they may also be referred to as emulators.

Interconnection and Verification Testing

Test types for interconnection and verification include cable and harness testers and bare-board testers.
Cable and harness testers are used to detect opens (missing connections), shorts (unwanted connections), and miswires (wrong pins) on cable harnesses, distribution panels, wiring looms, flexible circuits, and membrane switch panels with commonly used connector configurations. Other tests performed by automated test equipment include resistance and hipot tests.
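To make those fault categories concrete, here is a small illustrative Python sketch (not tied to any particular tester) that compares an expected pin-to-pin wiring map against measured continuity and sorts the differences into opens, shorts, and miswires.

# Classify harness faults by comparing expected connections with measured continuity.
def check_harness(expected, measured):
    """expected/measured: sets of frozensets({pin_a, pin_b}) where continuity should/does exist."""
    opens  = expected - measured                 # missing connections
    extras = measured - expected                 # unexpected continuity
    expected_pins = {pin for pair in expected for pin in pair}
    # Unexpected continuity touching a pin that should be wired elsewhere is a miswire;
    # unexpected continuity between otherwise unused pins is treated as a short.
    miswires = {pair for pair in extras if pair & expected_pins}
    shorts = extras - miswires
    return opens, shorts, miswires

expected = {frozenset({"J1-1", "J2-1"}), frozenset({"J1-2", "J2-2"})}
measured = {frozenset({"J1-1", "J2-2"}), frozenset({"J1-2", "J2-2"})}   # one wire landed on the wrong pin
opens, shorts, miswires = check_harness(expected, measured)
print("opens:", opens, "shorts:", shorts, "miswires:", miswires)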
Bare board automated test equipment is used to detect the completeness of a PCB circuit before assembly and wave solder.

Wednesday, 6 December 2017

Exploiting LabVIEW Libraries


Have you ever viewed a LabVIEW VI Hierarchy and become frustrated with not being able to locate a VI you needed to open?
Do you have large applications composed of similar modules but fear to jump, with both feet, into the learning curve of LVOOP?
Did you ever try to duplicate a sub-VI at the start of a new set of functions and find yourself deep in a nest of cross-linked VIs, or save a VI only to realize that the most suitable name has already been used?
Then LabVIEW libraries may be useful to you.
Libraries are a feature available in the LabVIEW project or they can be created stand-alone*. They have a number of features that allow you to specify shared properties and attributes of related VIs and custom controls.
In short, many of the features of LVOOP are available without the complications required for Dynamic Dispatching. The remainder of this document will serve as a tutorial that demonstrates how to create, define, and clone a library. Additional notes are included to illustrate how these features can be exploited to help you develop more robust applications that are easier to support than applications that do not use libraries.
*Libraries can be created stand-alone from the LabVIEW splash screen using the method:
File >>> New … >>> Other Files >>> Library
You can create a new library from the project by right-clicking the “My Computer” icon and selecting “New >>> Library”. Save it to a unique folder that will contain all of the files associated with the library.
Open the properties screen and then open the icon editor to compose a common icon for the library and its members.
Take a little time to create the icon because it will be shared by all of the members of the library. Do not get carried away and fill-up the entire icon. Leave some white space so that the icons of the component VIs can be customized to illustrate their role in the functionality of the library.
Create virtual folders in the library to help organize the VIs contained in it. I usually use three folders but you can use more or less depending on your needs and preferences. I use one to hold the controls, and another pair for the public and private VIs. I do not use auto-populating folders for a number of reasons.
I can control which VIs are included and which are not. Occasionally temporary VIs are created to do some basic testing and they are never intended to be part of the library. If functionality changes and the temporary VI breaks due to the change, the library may cause a build to fail due to the broken VI.
I can easily move a VI from private to public without having to move the VI on disk and then properly updating source code control to reflect the change.
I can keep the file paths shorter using the virtual folders while maintaining the structure of the project.
Additional virtual folders can be added if you want to further break down the organization of the VIs in the library. If you are developing a library that will be used by other developers or serve as a tool for others, you may want to include a folder for the VIs that define the API your library offers. The API can also be divided into additional virtual folders to break down the interface into functional areas if you wish. Implement the logical grouping of sub-VIs as needed for your library.
Set the Access Scope of the private virtual folder to private. While the private folder and the setting of the access scope are optional, taking advantage of these options will help you and the users of your library identify which VIs are not intended for use outside of the library. Attempting to use a VI with private scope from outside the library itself will break the calling VI and make it very obvious that the VI is not intended for public use.
Developing applications using libraries differs little from developing without them; beyond one exception, there is no additional work to use them. The exception is illustrated in Figure 8, where the name of the VI is highlighted. While the VI named in the project is shown as “Init_AI.vi”, the actual name of the VI is “DAQ.lvlib:AI.lvlib:Init_AI.vi”. The difference is the result of what is called “Name Mangling”: the actual name of the VI is prefixed by the names of the libraries that own it. This is a powerful feature that goes a long way toward avoiding cross-linking and will let us easily clone a library to use as the starting point of a similar library.
The Save As screen for the library not only lets us define the library name but also where in the project the library will be placed. This is handy for nested libraries but not critical; libraries can be moved around in the project or between libraries as needed using the project window. When a library is cloned using the Save As option, all of the VIs contained in the original library are duplicated and re-linked to the VIs in the new library. There is NO chance of cross-linking when cloning a library!
Libraries can help in all phases of an application from initial development to long-term support through to knowledge transfer. Remember, “Libraries” are your friend!

LabVIEW Improvements



LabVIEW passed its 30-year anniversary in 2016, and six months ago National Instruments launched a considerably updated version of LabVIEW: its next-generation LabVIEW NXG 1.0.
LabVIEW NXG is a totally reworked version of LabVIEW. Because it has been rebuilt from the ground up, LabVIEW NXG offers users significant improvements in performance as a result of the new code.
LabVIEW NXG offers some significant definitive improvements over the previous implementation of LabVIEW:
  • Plug & Play: a lot of work has gone into enabling LabVIEW NXG to provide easy set-up of hardware connections. It has true plug and play functionality.
  • IDE: The LabVIEW NXG environment has been totally overhauled to take elements of popular commercial software and replicate the attributes of the environment to make it more intuitive.
  • Tutorials: To facilitate the speedy uptake of newcomers to LabVIEW, the new LabVIEW NXG has inbuilt walk-throughs and other integrated learning facilities. These have been shown to greatly shorten the time it takes for newcomers to programme proficiently in LabVIEW. It is even possible to undertake a number of standard tasks without “hitting the code.”
National Instruments will continue to maintain the traditional LabVIEW line (LabVIEW 2017, launched alongside the new next-generation LabVIEW NXG), but ultimately, when total compatibility has been established, the two will converge, enabling users to benefit from the new streamlined core.
Users of LabVIEW will be given access to both LabVIEW 2017 and later versions as well as LabVIEW NXG. In this way, they can make the choice of which version suits their application best.
National Instruments spokespeople stressed that the traditional development line of LabVIEW will continue to be maintained so that the large investment in software and applications that users have is not at risk. However, drivers and many other areas are already compatible with both lines.
“Thirty years ago, we released the original version of LabVIEW, designed to help engineers automate their measurement systems without having to learn the esoterica of traditional programming languages. LabVIEW was the ‘nonprogramming’ way to automate a measurement system,” said Jeff Kodosky, NI co-founder and business and technology fellow, known as the ‘Father of LabVIEW.’
“For a long time, we focused on making additional things possible with LabVIEW, rather than furthering the goal of helping engineers automate measurements quickly and easily. Now we are squarely addressing this with the introduction of LabVIEW NXG, which we designed from the ground up to embrace a streamlined workflow. Common applications can use a simple configuration-based approach, while more complex applications can use the full open-ended graphical programming capability of the LabVIEW language, G.”

Monday, 20 November 2017

9 Things to Consider When Choosing Automated Test Equipment



Automated test equipment (ATE) can reduce the cost of testing and ensure that lab teams can focus on other, more important tasks. With ATE, productivity and efficiency are boosted to a maximum level by cutting out unnecessary tasks and daily activities.
However, you should not just cash out and invest in automated test equipment; there are factors that are important in finding the system that suits you best. Our team at ReadyDAQ has prepared nine things you should consider before choosing automated test equipment.

1. Endurance and Compactness

One of the most important things is that the ATE system your company picks is designed for optimal performance over the long term. Take a careful look at connections and components and judge whether they will survive repeated use. Many lab teams struggle to find large areas for their testing operations, so the automated test equipment should also be compact.

2. Customer Experience

Are other customers satisfied with the support and the overall experience they had? Does the company you bought the ATE from provide full support? You don't have to be the expert in automated test equipment, but they do, and their skills and expertise have to be available to you for when you need them. Customer support and the overall customer experience are a huge factor!

3. Scalability and Compatibility

One purchase does not have to be final, and it often isn't. You should check whether the equipment you ordered can be expanded or scaled over time. Your needs might change, and you want the ATE to adapt to them.
When compatibility comes to mind, we want to make sure that the equipment is built following all industry standards. Cross-compatibility is often important in situations where we no longer need or have lost the access to certain products. Better safe than sorry.

4. Comprehensive

Think of all the elements needed for testing. Even better, make a list. Does the equipment you have in mind cover ALL required elements? Don't forget about power and signaling, are they included too?

5. High Test Coverage and Diagnostics 

The ATE system has to be able to provide full coverage and give insights on all components of the processed product. This can help decrease the number of possible errors and failures later on.
How about diagnostics? Does the testing system provide robust diagnostic tools to make sure the obtained results are reliable and true?

6. Cost per Test

How much does a single test cost? You have to think and plan long-term, so the cost of a single test can help you calculate and estimate whether the system provides real value for the money invested.
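A back-of-the-envelope way to do that comparison (all figures below are invented placeholders) is to spread the purchase, programming, and upkeep costs over the number of tests you expect to run:

# Rough cost-per-test estimate with placeholder figures.
capital_cost    = 150_000.0    # purchase and fixturing (assumed)
programming     = 20_000.0     # one-time test development (assumed)
yearly_upkeep   = 5_000.0      # maintenance and calibration per year (assumed)
years           = 5
tests_per_year  = 200_000

total_cost = capital_cost + programming + yearly_upkeep * years
cost_per_test = total_cost / (tests_per_year * years)
print(f"Estimated cost per test: ${cost_per_test:.3f}")   # about $0.195 with these numbers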

7. Testimonials and Warranty 

Are other customers satisfied? Can the company direct you to testimonials from previous customers? What do their previous customers have to say about the systems and their performance?
Also, you don't want to be left hanging in case the system starts malfunctioning or simply stops working. Does the ATE system come with a comprehensive warranty? Make sure you're protected against damage that might happen in testing and see that the warranty covers that too.

8. Manufacturer Reputation

When did you first hear about the company? How? Did someone (besides them) say anything good about them? Is the company known for the high quality of their equipment? Discuss their past projects and learn more about the value their products provide.

9. Intuitive Performance

At first sight, is the system easy to use, or so complicated that it would require weeks of training for everyone in the lab? Does it offer intuitive performance within the testing procedure? Your team should be able to begin testing without having to go over every point in the technical manual in pinpoint detail.
Our team at ReadyDAQ is here to help you select the perfect automated test equipment for your lab.

Tuesday, 8 August 2017

Basics and Applications of Optical Sensor

An optical sensor is one that converts light rays into an electronic signal. The purpose of an optical sensor is to measure a physical quantity of light and, depending on the type of sensor, translate it into a form that is readable by an integrated measuring device. Optical sensors can be both external and internal. External sensors gather and transmit an appropriate quantity of light, while internal sensors measure bends and other small changes in direction.

Types of Optical Sensors

There are various kinds of optical sensors, and here are the most common types.

Through-Beam Sensors

The usual system consists of two independent components: the receiver and the transmitter, placed opposite each other. The transmitter projects a light beam onto the receiver. An interruption of the light beam is interpreted as a switch signal by the receiver; it does not matter where the interruption occurs.
The advantage is that large operating distances can be attained and detection is independent of the object's surface structure, colour, or reflectivity.
To ensure high operational reliability, the object must be large enough to interrupt the light beam completely.

Diffuse Reflection Sensors

Both receiver and transmitter are in one housing. The transmitted light is reflected by the object that must be identified.
The diffuse light intensity at the receiver serves as the switching condition. Regardless of the sensitivity setting, the front part of an object usually reflects less light than the rear part, which can lead to false switching operations.

Retro-Reflective Sensors

Here, both transmitter and receiver are in the same housing. Through a reflector, the emitted light beam is directed back to the receiver. An interruption of the light beam initiates a switching operation; it does not matter where the interruption occurs.
Retro-reflective sensors achieve large operating distances with switching points that are exactly reproducible while requiring little mounting effort. Any object interrupting the light beam is accurately detected independently of its colour or surface structure.

Wednesday, 19 July 2017

How to keep multicloud complexity under control



Using multiple cloud providers provides needed flexibility, but it also multiplies the work and risk of getting out of sync
“Multicloud” means that you use multiple public cloud providers, such as Google and Amazon Web Services (AWS), or AWS and Microsoft, or all three—you get the idea. Although this seems to provide the best flexibility, there are trade-offs to consider.
The drawbacks I see at enterprise clients relate to added complexity. Dealing with multiple cloud providers does give you a choice of storage and compute solutions, but you must still deal with two or more clouds, two or more companies, two or more security systems … basically, two or more ways of doing anything. It quickly can get confusing.
For example, one client confused security systems and thus inadvertently left portions of its database open to attack. It’s like locking the back door of your house but leaving the front door wide open. In another case, storage was allocated on two clouds at once, when only one was needed. The client did not find out until a very large bill arrived at the end of the month.
Part of the problem is that public cloud providers are not built to work together. Although they won’t push back if you want to use public clouds other than their own, they don’t actively support this usage pattern. Therefore, you must come up with your own approaches, management technology, and cost accounting.
The good news is that there are ways to reduce the multicloud burden.
For one, managed services providers (MSPs) can manage your multicloud deployments for you. They provide gateways to public clouds and out-of-the-box solutions for management, cost accounting, governance, and security. They will also be happy to take your money to host your applications, as well as provide access to public cloud services.
If you lean more toward the DIY approach, you can use cloud management platforms (CMPs). These place a layer of abstraction between you and the complexity of managing multiple public clouds. As a result, you use a single mechanism to provision storage and compute, as well as for security and management no matter how many clouds you are using.
I remain a fan of the multicloud approach. But you’ll get its best advantage if you understand the added complexity up front and the ways to reduce it.

Friday, 14 July 2017

Project Management in Medical Industry


data acquisition


The medical industry has grown multifold over the last decade, and the pace of innovation and development shows that with the growth of technology we can expect better and more affordable solutions in the health sector with the help of innovative medical devices. For these major medical device companies, innovation leads to prototyping, which is a major constituent of developing a medical device after thorough research. It is very important to focus solely on the development of the prototype, and something that hinders the process is the development of software to control the devices. For such project managers, ReadyDAQ is the one-stop shop for application development needs, as it offers the customizability and flexibility to develop applications for your data acquisition devices as per your needs and requirements.
 
The usual process of developing a prototype involves extensive research and study of the subject, after which data needs to be acquired from sensors and operating devices such as pumps, motors, and drivers for the smooth functioning of the device. For this, software normally needs to be developed individually for each device, but with ReadyDAQ you have the freedom to plug and play devices without developing an exclusive application from scratch.
Since research projects come with tight schedules and deadlines, it is the duty of the project manager to make sure that all the time and focus is being devoted to the development of the device making ReadyDAQ the perfect solution to this problem. 
ReadyDAQ supports multiple devices at a time, making it easier to control, read, acquire, and store signals. In the medical industry, it is very important to get precise readings for different values; hence, it plays a major role in any medical innovation or R&D center where error-free values are necessary to build the perfect prototype.
Built in the LabVIEW environment, this application is the perfect solution for project managers looking to save on time and expense. So, if you are looking to build the perfect prototype for the next big thing in the medical device industry, try ReadyDAQ. We offer a 7-day, 100% money-back guarantee.

Thursday, 13 July 2017

3 Reasons to Automate your Business

Let's accept the fact that with every technological development that takes place, its major focus is improved efficiency, cost cutting, and better output. This is the major reason we are focused on implementing the latest solutions for our businesses. An increasingly popular term that we come across is automation. And rightly so: it is the thing of the future, slowly connecting all the aspects of industrialization. The process of automation, although a long one, can be easily implemented using the perfect blend of software and hardware. But what is it that makes automation our priority? Let's have a look at the three most important reasons behind this transition.
Filling the gap between supply and demand: We have to agree that with the ever-increasing population, all industries are always under pressure to fulfil high demand numbers. To tackle this problem, automation is an absolute necessity, since it has helped increase output multifold. This increase in output has also led to less wastage and optimum efficiency.
Accuracy: Okay, let's just accept the fact that machine-made material is better and more precise than the human hand. While more and more industries are making the shift to automation technology, it is to be noted that their output has increased compared to human labor.
Cost cutting leads to increased efficiency: An automatic machine equals a hundred men. Well, even if this number is not exact, it is safe to say that a machine can give output that equals a lot of manpower. This not only saves money due to less investment in salaries but also saves production time. Testing is easier and simplifies the production process.
So when we look at these factors, we realize how important automation actually is. But, as we mentioned before, complementary software is very important for such hardware, and that is where ReadyDAQ jumps in. High-end machinery makes use of a lot of operational devices, so ReadyDAQ offers a development solution for all its software needs without actually having to start from scratch. Supporting simultaneous operation of multiple devices, it is the perfect solution for all industries trying to implement automation and its components. So, what are you waiting for? Download the 30-day trial version today and get a feel of the product before making that purchase!

Automation to Replace Human Hands?



As we move deeper into technologically advanced methodologies and manufacturing processes, we realize the power of the human mind: the mind that has developed a new league of technical procedures which have made our lives easy and our work easier. Manual labor is on its way to extinction a few years from now, thanks to the highly advanced machinery and automation industry. Automation, a combination of the words automatic and execution, has enabled a major chunk of processes to be executed without the human component. And with the amount of innovation taking place around the globe, it is surprising how robots and machinery have taken over the daunting human tasks.

But why do we support the intrusion of automation into our development process and how is it benefiting the industries? There is no doubt in the fact that machines can outperform humans in every aspect.
The precision and efficiency of an automatic machine are way better than a hundred humans working together. This is the most important reason why people prefer machines over man. While a human would need numerous hours to assemble a product, a machine can manufacture and assemble the same within minutes. This not only saves a lot of time but expense as well. Automation in industries is a one-time investment which gives you long-term benefits and efficient output. No doubt the machines demand maintenance, but it is still economical when compared to manual labor.

In huge manufacturing units, automation is a widespread concept which has taken over the human hand mainly because of the demand and supply chain where there is an excessively large need for manufactured goods.
But it should also be noted that along with the efficient hardware that goes into automating a factory, compatible and complementary software is also necessary. It is an intelligent software system that makes the machine efficient in providing optimum output. For this reason, ReadyDAQ, your one-stop shop for all development needs, has been created. It offers solutions to your software problems and is programmed to handle all operational devices such as pumps, motors, and sensors. A plug-and-play medium for devices, it helps the automation process in factories and industries by allowing users to connect and operate devices without any major configuration or development. It comes with a 30-day trial version so you can get a feel of the working before you actually make a purchase. So get yours today!

6 Steps on How to Learn or Teach LabVIEW OOP - Part 1

If you follow the NI training, you learn how to build a class on Thursday morning, and by Friday afternoon you are introduced to design patterns. Similarly, when I speak to people they seem keen to get people on to learning design patterns quickly – certainly, in the earlier days of adoption this topic always came up very early.
I think this is too fast. It adds additional complexity to learning OOP and personally I got very confused about where to begin.
Step 1 – The Basics
Learn how to make a class and the practical elements like how the private scope works. Use them instead of whatever you used before for modules, e.g. action engines or libraries. Don’t worry about inheritance or design patterns at this stage; that will come.
Step 2 – Practice!
Work with the encapsulation you now have and refine your design skills to make objects that are highly cohesive and easy to read. Does each class do one job? Great, you have learned the single responsibility principle, the first of the SOLID principles of OO design. Personally, I feel this is the most important one.
If your classes are large then make them smaller until they do just one job. Also, pay attention to coupling. Try to design code that doesn’t couple too many classes together – this can be difficult at first but small, specific classes help.
Step 3 – Learn inheritance
Use dynamic dispatch methods to implement basic abstract classes when you need functionality that can be changed, e.g. a simulated hardware class or support for two types of data logs. I’d look at the channeling pattern at this point too. It’s a very simple pattern that uses inheritance, and I have found it helpful in a number of situations. But no peeking at the others!

Friday, 7 July 2017

Setting up LabVIEW Project

Complete the following steps to set up the LabVIEW project:
 
  1. Launch LabVIEW by selecting Start»All Programs»National Instruments»LabVIEW.
  2. Click the Empty Project link in the Getting Started window to display the Project Explorer window. You can also select File»New Project to display the Project Explorer window.
  3. Select Help and make sure that Show Context Help is checked. You can refer to the context help throughout this process for information about items in the Project Explorer window and in your VIs.
  4. Right-click the top-level Project item in the Project Explorer window and select New»Targets and Devices from the shortcut menu to display the Add Targets and Devices dialog box.
  5. Make sure that the Existing target or device radio button is selected.
  6. Expand Real-Time CompactRIO.
  7. Select the CompactRIO controller to add to the project and click OK.
  8. Select FPGA Interface from the Select Programming Mode dialog box to put the system into FPGA Interface programming mode.
  9. Tip: Use the CompactRIO Chassis Properties dialog box to change the programming mode in an existing project. Right-click the CompactRIO chassis in the Project Explorer window and select Properties from the shortcut menu to display this dialog box.
  10. Click Discover in the Discover C Series Modules? dialog box if it appears.
  11. Click Continue.
  12. Drag and drop the C Series module(s) that will run in Scan Interface mode under the chassis item. Leave any modules you plan to write FPGA code for under the FPGA target.

Monday, 3 July 2017

CompactRIO Scan Mode Tutorial

This section will teach you how to create a basic control application on CompactRIO using scan mode. See the LabVIEW FPGA Tutorial if you choose to use the LabVIEW FPGA Interface instead. You should have a new LabVIEW Project that contains your CompactRIO system, including the controller, C Series I/O modules, and chassis. An NI 9211 thermocouple input module is used in this tutorial; nonetheless, the process can be followed for any analogue input module.
1. Save the project by selecting File»Save and entering Basic control with scan mode. Click OK.
2. This project will consist of only one VI, which is the LabVIEW Real-Time application that runs embedded on the CompactRIO controller. Create the VI by right-clicking the CompactRIO real-time controller in the project and selecting New»VI. Save the VI as RT.vi.
3. The key operation of this application includes three routines: startup, run, and shutdown. An easy way to accomplish this order of operation is a flat sequence structure. Place a flat sequence structure with three frames on the RT.vi block diagram.
4. Then insert a timed loop into the Run frame of the sequence structure. Timed loops provide the ability to synchronise code to various timing sources, including the NI Scan Engine that reads and writes scan mode I/O.
5. To configure the timed loop, double-click the clock icon on the left input node.
6. Now select Synchronise to Scan Engine as the Loop Timing Source. Click OK. This causes the code in the timed loop to execute once, immediately after each I/O scan, ensuring that any I/O values used in this timed loop are the most recent ones.
7. The previous step configured the timed loop to run synchronised to the scan engine. Now configure the rate of the scan engine itself by right-clicking the CompactRIO real-time controller in the LabVIEW Project and selecting Properties.
8. Then choose Scan Engine from the categories on the left and enter 100 ms as the Scan Period; all the I/O in the CompactRIO system will then be updated every 100 ms (10 Hz). From this page, the Network Publishing Period can also be set, which controls how often the I/O values are published to the network for remote monitoring and debugging. After that, click OK.
9. Now that the I/O scan rate is configured, it is time to add the I/O reads to the application for control. With CompactRIO Scan Mode you can simply drag and drop the I/O variables from the LabVIEW Project to the RT block diagram. Expand the CompactRIO real-time controller, chassis, and the I/O module you would like to log. Select AI0 by clicking on it, then drag and drop it into the timed loop on your RT.vi diagram.
10. Now configure the digital module for specialty digital pulse-width modulated (PWM) output so that a PWM signal can be used to control the imaginary heater unit. To do this, right-click the digital module in the project and select Properties. In the C Series Module Properties dialogue, select Specialty Digital Configuration and a Specialty Mode of Pulse-Width Modulation. Specialty Digital mode allows the module to perform pattern-based digital I/O at rates significantly faster than is available with the scan interface. Click OK and the module will now be in PWM mode.
11. You are then ready to add the PWM output to the block diagram. To do so, expand the Mod2 object in the project and drag and drop the PWM0 item onto the block diagram, as was done with the AI0 I/O node in the previous step.
12. After that, add the PID control logic to this program. Right-click the block diagram to open the functions palette and click the Search button in the top right of the palette.
13. Search for PID, select PID.vi in the Control Design and Simulation palette, drag it onto the block diagram inside the timed loop, and wire the PID VI.
14. The set point input is not wired yet. That is because it is best practice to keep user interface (UI) objects out of the high-priority control loop. To interact with and adjust the set point at run time, create a control that can be manipulated in a lower-priority loop. To pass the two controls in our application (set point and stop) into the high-priority control loop, two new single-process shared variables are needed.

Create a single-process shared variable by right-clicking the RT CompactRIO target in the LabVIEW Project and selecting New»Library. Rename the library to something descriptive, such as RTComm. Then right-click the new library and select New»Variable, which opens the Shared Variable Properties dialogue. Name the variable SetPoint (for example) and select “Single Process” for the variable type in the Variable Type drop-down box. Finally, click the RT FIFO option in the left-hand tree and check the Enable RT FIFO check box.
15. In the library that has just been created, make another single-process shared variable. This variable is for the Stop control that will stop the program when needed. It should have all the same settings as the previous SetPoint variable except for the type, which should be Boolean.
16. Next, create the user interface: a slide control, a waveform chart, a numeric control, and a stop (Boolean) control.
17. Finish the program by creating a secondary (non-timed) loop for the UI objects and completing the wiring of the block diagram.
18. Note the addition of I/O to the startup and shutdown states to ensure that the I/O is in a known state when the program begins and ends. The basic control application is now ready to run.

Thursday, 29 June 2017

Getting Started with CompactRIO - Performing Basic Control


The National Instruments CompactRIO

The National Instruments CompactRIO programmable automation controller is an advanced embedded control and data acquisition system designed for applications that demand high performance and reliability. Its open, embedded architecture, small size, extreme ruggedness, and flexibility allow engineers and embedded developers to use COTS hardware to quickly build custom embedded systems. NI CompactRIO is powered by National Instruments LabVIEW FPGA and LabVIEW Real-Time technologies, giving engineers the ability to design, program, and customize the CompactRIO embedded system with easy-to-use graphical programming tools.
This controller combines a high-performance FPGA, an embedded real-time processor, and hot-swappable I/O modules. Every I/O module is connected directly to the FPGA, which provides low-level customization of timing and I/O signal processing. The FPGA is connected to the embedded real-time processor via a high-speed PCI bus. This represents a low-cost architecture with direct access to low-level hardware resources. LabVIEW contains built-in data transfer mechanisms that pass data from the I/O modules to the FPGA, and from the FPGA to the embedded processor for real-time analysis, post-processing, data logging, or communication to a networked host computer.

FPGA

The embedded FPGA is a reconfigurable, high-performance chip that engineers can program with the LabVIEW FPGA tools. In the past, FPGA designers had to learn and use complex design languages such as VHDL to program FPGAs; now, any scientist or engineer can use graphical LabVIEW tools to customize and program FPGAs. Using the FPGA hardware embedded in CompactRIO, you can implement custom triggering, timing, control, synchronization, and signal processing for analog and digital I/O.

C Series I/O Modules

A variety of I/O types are available, including voltage, current, thermocouple, RTD, accelerometer, and strain gauge inputs; 12, 24, and 48 V industrial digital I/O; up to ±60 V simultaneous-sampling analogue I/O; 5 V/TTL digital I/O; counter/timers; pulse generation; and high-voltage/high-current relays. Because the modules contain built-in signal conditioning for extended voltage ranges or industrial signal types, you can usually connect wires directly from the C Series modules to your sensors and actuators.

Weight and Size

Size, weight, and I/O channel density are demanding design requirements in many embedded applications. A four-slot reconfigurable embedded system weighs just 1.58 kg (3.47 lb) and measures 179.6 by 88.1 by 88.1 mm (7.07 by 3.47 by 3.47 in.).



Monday, 12 June 2017

I²C (INTER-INTEGRATED CIRCUIT)


I²C (Inter-Integrated Circuit), pronounced I-squared-C or I-two-C, is a multi-master, multi-slave, packet switched, single-ended, serial computer bus invented by Philips Semiconductor (now NXP Semiconductors). It is typically used for attaching lower-speed peripheral ICs to processors and microcontrollers in short-distance, intra-board communication. Alternatively, I²C is spelled I2C (pronounced I-two-C) or IIC (pronounced I-I-C).
Since October 10, 2006, no licensing fees are required to implement the I²C protocol. However, fees are still required to obtain I²C slave addresses allocated by NXP.
SMBus, defined by Intel in 1995, is a subset of I²C, defining a stricter usage. One purpose of SMBus is to promote robustness and interoperability. Accordingly, modern I²C systems incorporate some policies and rules from SMBus, sometimes supporting both I²C and SMBus, requiring only minimal reconfiguration either by commanding or output pin use.
I²C uses only two bidirectional open-drain lines, Serial Data Line (SDA) and Serial Clock Line (SCL), pulled up with resistors. Typical voltages used are +5 V or +3.3 V, although systems with other voltages are permitted.
The I²C reference design has a 7-bit or a 10-bit (depending on the device used) address space. Common I²C bus speeds are the 100 kbit/s standard mode and the 10 kbit/s low-speed mode, but arbitrarily low clock frequencies are also allowed. Recent revisions of I²C can host more nodes and run at faster speeds (400 kbit/s Fast mode, 1 Mbit/s Fast mode plus or Fm+, and 3.4 Mbit/s High-Speed mode). These speeds are more widely used on embedded systems than on PCs. There are also other features, such as 16-bit addressing.
Note the bit rates are quoted for the transactions between master and slave without clock stretching or other hardware overhead. Protocol overheads include a slave address and perhaps a register address within the slave device, as well as per-byte ACK/NACK bits. Thus the actual transfer rate of user data is lower than those peak bit rates alone would imply. For example, if each interaction with a slave inefficiently allows only 1 byte of data to be transferred, the data rate will be less than half the peak bit rate.
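As a concrete illustration of that overhead, the short Python calculation below assumes a simple single-byte register read (address+write, register byte, repeated start, address+read, one data byte, with each byte followed by an ACK/NACK clock) on a 100 kbit/s standard-mode bus; start/stop timing is ignored for simplicity.

# Effective user-data rate for a hypothetical single-byte I2C register read.
bus_clock_hz    = 100_000                # standard-mode SCL rate
clocks_per_byte = 9                      # 8 data bits + 1 ACK/NACK bit
bytes_per_transaction = 4                # addr+W, register, addr+R (after repeated start), data
clocks_per_read = bytes_per_transaction * clocks_per_byte

reads_per_second = bus_clock_hz / clocks_per_read
payload_bits_per_second = reads_per_second * 8           # one payload byte per read
print(f"~{reads_per_second:.0f} reads/s, ~{payload_bits_per_second/1000:.1f} kbit/s of user data")
# Roughly 2800 reads/s and ~22 kbit/s: well under half of the 100 kbit/s peak bit rate.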
The maximal number of nodes is limited by the address space and also by the total bus capacitance of 400 pF, which restricts practical communication distances to a few meters. The relatively high impedance and low noise immunity require a common ground potential, which again restricts practical use to communication within the same PC board or a small system of boards.

Friday, 9 June 2017

WHAT IS RS422?

RS-422, also known as TIA/EIA-422, is a technical standard originated by the Electronic Industries Alliance that specifies electrical characteristics of a digital signaling circuit. Differential signaling can transmit data at rates as high as 10 Mbit/s, or may be sent on cables as long as 1500 meters. Some systems directly interconnect using RS-422 signals, or RS-422 converters may be used to extend the range of RS-232 connections. The standard only defines signal levels; other properties of a serial interface, such as electrical connectors and pin wiring, are set by other standards.
Several key advantages offered by this standard include the differential receiver, a differential driver and data rates as high as 10 Megabits per second at 12 meters (40 ft). Since the signal quality degrades with cable length, the maximum data rate decreases as cable length increases.
The maximum cable length is not specified in the standard, but guidance is given in its annex. (This annex is not a formal part of the standard, but is included for information purposes only.) Limitations on line length and data rate vary with the parameters of the cable length, balance, and termination, as well as the individual installation. Conservative maximum data rates with 24 AWG UTP (POTS) cable are 10 Mbit/s at 12 m to 90 kbit/s at 1200 m.
RS-422 specifies the electrical characteristics of a single balanced signal. The standard was written to be referenced by other standards that specify the complete DTE/DCE interface for applications which require a balanced voltage circuit to transmit data. These other standards would define protocols, connectors, pin assignments and functions. Standards such as EIA-530 (DB-25 connector) and EIA-449 (DC-37 connector) use RS-422 electrical signals. Some RS-422 devices have 4 screw terminals for pairs of wire, with one pair used for data in one direction.
RS-422 cannot implement a truly multi-point communications network such as with EIA-485 since there can be only one driver on each pair of wires, however, one driver can be connected to up to ten receivers.
RS-422 can interoperate with interfaces designed to MIL-STD-188-114B, but they are not identical. RS-422 uses a nominal 0 to 5-volt signal while MIL-STD-188-114B uses a signal symmetric about 0 V. However the tolerance for common mode voltage in both specifications allows them to interoperate. Care must be taken with the termination network.
EIA-423 is a similar specification for unbalanced signaling (RS-423).
When used in relation to communications wiring, RS-422 wiring refers to cable made of 2 sets of twisted pair, often with each pair being shielded, and a ground wire. While a double pair cable may be practical for many RS-422 applications, the RS-422 specification only defines one signal path and does not assign any function to it. Any complete cable assembly with connectors should be labeled with the specification that defined the signal function and mechanical layout of the connector, such as RS-449.

Friday, 2 June 2017

3 Steps to Understand RS232 Devices

Having troubles with controlling your RS232 device? This article will certainly help you understand almost all of the hardware and software standards for RS232.

Step 1: Understand RS232 Connection & Signals

RS-232C, EIA RS-232, or simply RS-232, refers to the same standard defined by the Electronic Industries Association in 1969 for serial communication.
DTE stands for Data Terminal Equipment. Any computer is a DTE. DCE stands for Data Communication Equipment. Any modem is a DCE.
DTE normally comes with a Male Connector, while DCE comes with a Female Connector. However, that is not always the case. Fortunately, there is a simple way to confirm this:
Measure Pin 3 and Pin 5 of a DB-9 Connector with a Volt Meter, if you get a voltage of -3V to -15V, then it is a DTE device. If the voltage is on Pin 2, then it is a DCE device. Simple and easy.
A straight-through cable is used to connect a DTE (e.g. computer) to a DCE (e.g. modem), with all signals on one side connected to the corresponding signals on the other side on a one-to-one basis. A crossover (null modem) cable is used to connect two DTEs directly; it does not require a modem in between. It crosses the transmit and receive data signals between the two sides, and there are many variations in how the other control signals are wired.

Step 2: Learn about the Protocol

A protocol is one or a few sets of hardware and software rules agreed to by all communication parties for exchanging data correctly and efficiently.
Synchronous and Asynchronous Communications
Synchronous Communication requires the sender and receiver to share the same clock. The sender provides a timing signal to the receiver so that the receiver knows when to "read" the data. Synchronous Communication generally has higher data rates and greater error-checking capability. A printer is a form of Synchronous Communication.
Asynchronous Communication has no timing signal or clock. Instead, it inserts Start / Stop bits into each byte of data to "synchronize" the communication. As it uses fewer wires for communication (no clock signals), Asynchronous Communication is simpler and more cost-effective. RS-232 / RS-485 / RS-422 / TTL are the forms of Asynchronous Communications.

Drilling Down: Bits and Bytes

Internal computer communications consist of digital electronics, represented by only two conditions: ON or OFF. We represent these with two numbers, 0 and 1, each of which in the binary system is termed a bit.
A byte consists of 8 bits, which represent decimal numbers 0 to 255, or hexadecimal numbers 00 to FF. As described above, the byte is the basic unit of asynchronous communications.

Step 3: Control your RS232 devices

After reading and understanding the first two steps we’ve talked about, it is now easy to test and control your RS232 devices and get a feel for how they work.
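As a minimal starting point, the sketch below uses the common pyserial package; the port name, baud rate, framing, and the query string are placeholders to adjust for your own instrument.

import serial   # pyserial: pip install pyserial

# Open the port with typical 8-N-1 settings (adjust the port name and baud rate for your device).
with serial.Serial("COM3", 9600, bytesize=8, parity="N", stopbits=1, timeout=1) as port:
    port.write(b"*IDN?\r\n")                      # example query; actual commands depend on the device
    reply = port.readline()                       # read until newline or timeout
    print(reply.decode(errors="replace").strip())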
ReadyDAQ offers software solutions for RS232 devices; make sure to check them out.

Thursday, 1 June 2017

INTRODUCTION TO RS232 SERIAL COMMUNICATION - PART 2

http://www.readydaq.com/daq
Assume we want to send the letter ‘A’ over the serial port. The binary representation of the letter ‘A’ is 01000001. Remembering that bits are transmitted from least significant bit (LSB) to most significant bit (MSB), the bit stream transmitted would be as follows for the line characteristics 8 bits, no parity, 1 stop bit, 9600 baud.

LSB (0 1 0 0 0 0 0 1 0 1) MSB
The above represents (Start Bit) (Data Bits) (Stop Bit)
To calculate the actual byte transfer rate, simply divide the baud rate by the number of bits that must be transferred for each byte of data. In the above example, each character requires 10 bits to be transmitted. As such, at 9600 baud, up to 960 bytes can be transferred in one second.
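The same arithmetic takes only a few lines of Python (8-N-1 framing assumed, as in the example above):

baud_rate     = 9600
bits_per_char = 1 + 8 + 1                  # start bit + 8 data bits + 1 stop bit, no parity
bytes_per_second = baud_rate / bits_per_char
print(bytes_per_second)                    # 960.0 bytes per second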
The first article discussed the “electrical/logical” characteristics of the data stream. We will now expand the discussion to the line protocol.
Serial communication can be half duplex or full duplex. Full duplex communication means that a device can receive and transmit data at the same time. Half duplex means that the device cannot send and receive at the same time. It can do them both, but not at the same time. Half duplex communication is all but outdated except for a very small focused set of applications.
Half duplex serial communication needs at a minimum two wires: signal ground and the data line. Full duplex serial communication needs at a minimum three wires: signal ground, transmit data line, and receive data line. The RS232 specification governs the physical and electrical characteristics of serial communications. This specification defines several additional signals that are asserted (set to logical 1) for information and control beyond the data signals and signal ground.
These signals are the Carrier Detect Signal (CD), asserted by modems to signal a successful connection to another modem; Ring Indicator (RI), asserted by modems to signal the phone ringing; Data Set Ready (DSR), asserted by modems to show their presence; Clear To Send (CTS), asserted by modems if they can receive data; Data Terminal Ready (DTR), asserted by terminals to show their presence; and Request To Send (RTS), asserted by terminals when they want to send data. The section RS232 Cabling describes these signals and how they are connected.
The above paragraph alluded to hardware flow control. Hardware flow control is a method that two connected devices use to tell each other electronically when to send or when not to send data. A modem in general drops (logical 0) its CTS line when it can no longer receive characters. It re-asserts it when it can receive again. A terminal does the same thing instead with the RTS signal. Another method of hardware flow control in practice is to perform the same procedure in the previous paragraph except that the DSR and DTR signals are used for the handshake.
Note that hardware flow control requires the use of additional wires. The benefit to this, however, is crisp and reliable flow control. Another method of flow control used is known as software flow control. This method requires a simple 3 wire serial communication link, transmit data, receive data, and signal ground. If using this method, when a device can no longer receive, it will transmit a character that the two devices agreed on. This character is known as the XOFF character. This character is generally a hexadecimal 13. When a device can receive again it transmits an XON character that both devices agreed to. This character is generally a hexadecimal 11.
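With pyserial, software flow control can simply be switched on when opening the port, or the two characters can be honoured by hand; the sketch below assumes the conventional XOFF = 0x13 and XON = 0x11 values mentioned above, and the port name is a placeholder.

import serial

XON, XOFF = b"\x11", b"\x13"

# Option 1: let pyserial handle XON/XOFF automatically.
port = serial.Serial("COM3", 9600, xonxoff=True, timeout=1)

# Option 2: honour the characters yourself while sending a large buffer.
def send_with_xonxoff(port, payload, chunk=64):
    paused = False
    for start in range(0, len(payload), chunk):
        incoming = port.read(port.in_waiting or 0)     # pick up any flow-control bytes
        if XOFF in incoming:
            paused = True
        if XON in incoming:
            paused = False
        while paused:                                  # wait until the receiver is ready again
            if XON in port.read(1):
                paused = False
        port.write(payload[start:start + chunk])

send_with_xonxoff(port, b"example payload " * 100)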

Friday, 26 May 2017

Digital Outputs

Digital outputs require a similar investigation and many of the same considerations as digital inputs. These include careful consideration of the output voltage range, the maximum update rate, and the maximum drive current required. However, the outputs also have a number of specific considerations, as described below. Relays have the benefit of high off impedance, low off leakage, low on resistance, indifference between AC and DC signals, and built-in isolation. However, they are mechanical devices and consequently provide lower reliability and typically slower response rates. Semiconductor outputs often have an advantage in speed and reliability.
Semiconductor switches also tend to be smaller than their mechanical equivalents, so a semiconductor-based digital output device will typically provide more outputs per unit volume. When using DC semiconductor devices, be careful to consider whether your system requires the output to sink or source current; devices are available in both configurations to satisfy varying requirements.

Current Limiting/Fusing

Most outputs, and especially those used to switch high currents (100 mA or so), offer some kind of output protection. There are three kinds most commonly available. The first is a simple fuse. Cheap and reliable, the main issue with fuses is that they cannot be reset and must be replaced when blown. The second kind of current limiting is provided by a resettable fuse. Typically, these devices are variable resistors: once the current reaches a certain threshold, their resistance begins to rise rapidly, ultimately limiting the current and stopping the flow.
Once the offending connection is removed, the resettable fuse returns to its original low-impedance state. The third kind of limiter is an actual current monitor that turns the output off if and when an overcurrent is detected. This type of limiter has the advantage of not requiring replacement following an overcurrent event. Many implementations also allow the overcurrent trip to be set on a channel-by-channel basis, even with a single output board.


Wednesday, 10 May 2017

Quadrature Encoders

Quadrature encoders are likewise used to measure angular displacement and rotation. Unlike the other devices we have described in this article, these devices provide a digital output. There are two basic digital outputs, in the form of 90-degree out-of-phase digital pulse trains. The frequency of the pulses determines the angular speed, while the relative phase between the two (+90° or -90°) indicates the direction of rotation.
These pulse trains can be counted by many generic DAQ counter systems, with one of the outputs connected to a counter clock while the other is connected to an up/down pin. However, the encoder is such a common part of many DAQ systems that many vendors provide an interface developed specifically for quadrature measurements. One thing that cannot be determined from the pulse counts alone is the absolute position of the shaft.
Thus, most encoder systems also provide an "Index" output. This index signal produces a pulse at a known angular position. Once a known position is detected, the absolute position can be determined by adding (or subtracting) the relative rotation to the known index position. Many encoders provide differential outputs, though differential noise immunity is seldom required unless the electrical environment is extremely harsh (e.g., nearby arc welding stations) or the runs from the encoder to the DAQ system are long (hundreds of feet or more). Dedicated encoder interfaces are available from many vendors in a variety of configurations.
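For systems without a dedicated quadrature interface, the same decoding can be done in software. The sketch below (illustrative Python; the direction convention and counts-per-revolution figure are assumptions) applies the standard 4x decoding table to successive A/B samples and re-zeros the count whenever the index pulse is seen.

# 4x quadrature decoding from sampled A/B levels, with index (Z) re-zeroing.
# Key: previous (A, B) state -> current (A, B) state -> count change.
TRANSITIONS = {
    (0, 0): {(0, 1): +1, (1, 0): -1},
    (0, 1): {(1, 1): +1, (0, 0): -1},
    (1, 1): {(1, 0): +1, (0, 1): -1},
    (1, 0): {(0, 0): +1, (1, 1): -1},
}

def decode(samples, counts_per_rev=4096):
    """samples: iterable of (A, B, Z) tuples taken fast enough to catch every edge."""
    position, prev = 0, None
    for a, b, z in samples:
        state = (a, b)
        if prev is not None and state != prev:
            position += TRANSITIONS[prev].get(state, 0)   # 0 = illegal jump (a missed edge)
        if z:                                             # index pulse marks a known angular position
            position = 0
        prev = state
    return position, 360.0 * position / counts_per_rev    # counts and degrees

print(decode([(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0), (0, 0, 0)]))   # -> (4, ~0.35 deg)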
ICP/IEPE Piezoelectric Crystal Sensors

When considering piezoelectric crystal devices for use in a DAQ system, most people think of vibration and accelerometer sensors, as these crystals are the basis for the ubiquitous ICP/IEPE sensors. It is generally understood that when you apply a force to a piezoelectric crystal, the crystal deforms slightly, and this deformation induces a measurable voltage across the crystal. Another feature of these crystals is that a voltage placed across an unstressed piezoelectric crystal causes the crystal to deform.
This deformation is actually quite small, but it is also very well behaved and predictable. Piezoelectric crystals have become a very common motion control device in systems that require small deflections. In particular, they are used in a wide variety of laser control systems as well as a host of other optical control applications. In such applications, a mirror is attached to the crystal, and as the voltage applied to the crystal is changed, the mirror moves.
Although the movement is typically not perceptible to the human eye, at the wavelength of light the movement is significant. Driving these piezoelectric devices presents two interesting challenges. First, achieving the desired movement from a piezoelectric crystal often requires large voltages, though fortunately at low DC currents. Second, although the crystals have high DC impedances, they also have high capacitance, and driving them at high rates is not a trivial undertaking. Special drivers, such as UEI's PD-AOAMP-115, are often required, as the typical analog output board does not offer the output voltage or capacitive drive capability required.

Monday, 27 March 2017

Calculate the Error and Eliminate It Mathematically

If you know the actual difference in coefficients of thermal expansion between the strain gauge and the part being tested, it is theoretically possible to mathematically eliminate the error caused by changes in temperature. Obviously, to do this, you also need to measure the temperature accurately at the strain gauge installation. The strain gauge expansion coefficients, however, are not generally available from the manufacturer, as they tend to vary from batch to batch. Though possible, compensating for temperature effects using only this "calculated" method is rarely done. More common, but still not very common, is a pseudo-calculated method. Rather than using a predetermined or predicted coefficient to calculate the differential strain induced by temperature changes, it is possible to determine the function experimentally. The word function is used intentionally, as the actual strain versus temperature curve is rarely linear, particularly over large temperature changes. However, if the application allows the system's strain versus temperature curve to be determined experimentally, it becomes fairly straightforward to remove the error mathematically.
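As a sketch of that experimental approach (plain Python with NumPy; the calibration numbers are invented), fit the apparent-strain-versus-temperature curve once on an unloaded part, then subtract the fitted value from later readings:

import numpy as np

# Calibration run on an unloaded part: temperature (deg C) vs. apparent strain (microstrain).
cal_temp   = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
cal_strain = np.array([0.0, 12.0, 27.0, 46.0, 70.0])        # invented values

# The curve is rarely linear, so fit a low-order polynomial rather than a single coefficient.
apparent = np.polynomial.Polynomial.fit(cal_temp, cal_strain, deg=2)

def compensated_strain(measured_microstrain, temperature_c):
    """Remove the temperature-induced (apparent) strain from a reading."""
    return measured_microstrain - apparent(temperature_c)

print(compensated_strain(250.0, 55.0))    # reading taken at 55 deg C with the thermal term removed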

Match the Strain Gauge to the Part Tested 

The use of various alloys and metals allows manufacturers to provide strain gauges designed to match the thermal expansion/contraction behavior of a wide variety of materials commonly subject to strain (and stress) testing. This type of gauge is referred to as a "Self Temperature Compensated" (or STC) strain gauge. These STC gauges are available from a variety of manufacturers and are specified for use with a wide variety of part materials. As you might imagine, the more common a metal, the better the odds are that there is an STC gauge that matches. However, you can rely on being able to find a good match for such materials as aluminum, brass, cast iron, copper, carbon steel, stainless steel, titanium, and more. Although the match between the STC gauge and the part under test may not be perfect, it will usually be sufficiently accurate from freezing to well past the boiling point of water. For more details on the exact accuracy to expect, you should contact your strain gauge manufacturer.

Use an Identical Strain Gauge in Another Leg of the Bridge 

Due to the ratiometric nature of the Wheatstone bridge, a second, unstrained gauge (often referred to as a "dummy" gauge) placed in another leg of the bridge will compensate for temperature-induced strain. Note that the dummy gauge should be identical to the "measuring" gauge and should be subject to the same environment.
Strain gauges tend to be small and have short thermal time constants (i.e., their temperature changes quickly in response to a temperature change around them), while the part under test may have substantial thermal mass and may change temperature slowly. Consequently, it is good practice to mount the dummy gauge adjacent to the gauge being measured. However, it should be attached in such a way that it is not subjected to the induced strain of the tested part. In some cases, with relatively thin subjects and when measuring bending strain (rather than pure tensile or compressive strain), it may be possible to mount the dummy gauge on the opposite side of a bar or beam. In this case, the temperature effect of the gauges is eliminated and the scale factor of the output is effectively doubled.
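A simplified numeric sketch (assuming small strains, the usual first-order bridge approximation Vout ≈ Vexc · GF · (ε_active − ε_adjacent) / 4, and a typical gauge factor of 2) shows how the dummy gauge cancels the thermally induced apparent strain, and how the opposite-side bending arrangement doubles the output:

# First-order half-bridge output with an active gauge and a second gauge in an adjacent arm.
GF = 2.0                       # typical foil gauge factor (assumed)

def bridge_output(eps_active, eps_adjacent, vexc=5.0):
    return vexc * GF * (eps_active - eps_adjacent) / 4.0

mech    = 500e-6               # mechanical strain on the part (assumed)
thermal = 80e-6                # apparent strain from a temperature change (assumed)

# The dummy gauge sees only the thermal term, so that term cancels:
print(bridge_output(mech + thermal, thermal))                 # ~1.25 mV: mechanical strain only

# Bending case with the second gauge on the opposite side: it sees -mech + thermal,
# so the thermal term still cancels and the mechanical output doubles.
print(bridge_output(mech + thermal, -mech + thermal))         # ~2.50 mV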