Use "RFSA Acquire IQ Data in Blocks" VI

I want to run the example "RFSA Acquire IQ Data in Blocks" VI.  It asks for the resource name.
I have an ni5660 = ni5600 + ni5620.  In MAX the ni5600 has a name and can be seen under NI-DAQmx Devices, but I can't see the ni5620 in that section.  I see it under Traditional NI-DAQ (Legacy) Devices, and hence I can't give it a name.
Also, the ni5600 under NI-DAQmx Devices has no digitizer associated with it.
I went through this link and it was not of much help: http://digital.ni.com/public.nsf/allkb/ABBB55946EB39A02862571FD0052024C
I am attaching the example and the screen shots on MAX.
Attachments:
RFSA Acquire IQ Data in Blocks.vi ‏45 KB
max.JPG ‏105 KB

Hello Om,
The RFSA Acquire IQ Data in Blocks VI uses the NI-RFSA driver, not the ni5660 driver.  The RFSA VIs require the 5661 = 5600 + 5142, and both of these devices are DAQmx devices.  However, the 5660 = 5600 + 5620, and the 5620 is only a Traditional NI-DAQ device.  You only have to associate a digitizer with the 5600 if you use it with the 5142 and the NI-RFSA driver.  The behavior you are seeing is expected.
All of the examples you can run will start with ni5660; perhaps the ni5660 Fetch IQ Data example will work for you.
Regards,
Jesse O.
Applications Engineering
National Instruments
Jesse O. | National Instruments R&D

Similar Messages

  • Conversion from scaled to unscaled data using Graph Acquired Binary Data

    Hello !
    I want to acquire temperature with a pyrometer using a PCI-6220 (analog input, 0-5 V). I'd like to use the VI
    Cont Acq&Graph Voltage-To File(Binary) to write the data into a file and the VI Graph Acquired Binary Data to read it back and analyze it. But in that VI I don't quite understand how "Convert unscaled to scaled data" works; I know it takes information from the header to scale the data, but how?
    My card will give me back a voltage, but how can I transform it into temperature? Can I configure this somewhere so that "Convert unscaled to scaled data" does it, or should I do it myself with a formula?
    Thanks.

    Nanie, I've used these examples extensively and I think I can help. Incidentally, there is actually a bug in the examples, but I will start a new thread to discuss this (I haven't written the post yet, but it will be under "Bug in Graph Acquired Binary Data.vi:create header.vi Example" when I do get around to posting it). Anyway, to address your questions about the scaling: I've included an image of the block diagram of Convert Unscaled to Scaled.vi for reference.
    To start, the PCI-6220 has 16-bit resolution. That means that the range (±10 V, for example) is broken down into 2^16 (65536) steps, or steps of ~0.3 mV (20 V / 65536) in this example. When the data is acquired, it is read as the number of steps (an integer) and that is how you are saving it. In general it takes less space to store integers than real numbers; in this case you are storing the results in I16s (2 bytes/value) instead of SGLs or DBLs (4 or 8 bytes/value, respectively).
    To convert the integer to a scaled value (either volts or some other engineering unit) you need to scale it. In the situation where you have a linear transfer function (scaled = offset + multiplier * unscaled), which is a 1st-order polynomial, it's pretty straightforward. The Convert Unscaled to Scaled.vi handles the more general case of scaling by an nth-order polynomial (a0*x^0 + a1*x^1 + a2*x^2 + ... + an*x^n). A linear transfer function has two coefficients: a0 is the offset and a1 is the multiplier; the rest of the a's are zero.
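    As a rough, driver-free illustration of that linear case in Python (the ±10 V range and the coefficient values here are my own assumptions, not numbers read from any actual header):
    # Linear scaling of raw I16 codes, assuming a +/-10 V, 16-bit range.
    a0 = 0.0            # offset (volts)
    a1 = 20.0 / 65536   # multiplier: ~0.3 mV per step
    raw_codes = [-32768, 0, 16384, 32767]     # what gets stored in the binary file
    volts = [a0 + a1 * x for x in raw_codes]  # scaled = offset + multiplier * unscaled
    print(volts)                              # -> [-10.0, 0.0, 5.0, ~9.9997]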
    When you use the Cont Acq&Graph Voltage-To File(Binary).vi to save your data, a header is created which contains the scaling coefficients stored in an array. When you read the file with Graph Acquired Binary Data.vi those scaling coefficients are read in and converted to a two dimensional array called Header Information that looks like this:
    ch0 sample rate, ch0 a0, ch0 a1, ch0 a2,..., ch0 an
    ch1 sample rate, ch1 a0, ch1 a1, ch1 a2,..., ch1 an
    ch2 sample rate, ch2 a0, ch2 a1, ch2 a2,..., ch2 an
    The array then gets transposed before continuing.
    This transposed array and the unscaled data are passed into Convert Unscaled to Scaled.vi. I am probably just now getting to your question, but hopefully the background makes the rest of this simple. The Header Information array gets split up, with the sample rates (the first row in the transposed array), the offsets (the second row), and all the rest of the gains entering the for loops separately. The sample rate sets the dt for the channel, the offset is used to initialize the scaled data array, and the gains are used to multiply the unscaled data. With a linear transfer function there will only be one gain for each channel. The clever part of this design is that nothing has to be changed to handle non-linear polynomial transfer functions.
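    To make that per-channel splitting concrete, here is a small Python sketch that mimics the same logic; the Header Information values are invented for illustration and are not taken from a real file:
    # Each row: [sample rate, a0 (offset), a1, a2, ...] -- illustrative values only.
    header_info = [
        [1000.0, 0.0, 20.0 / 65536],   # ch0: linear scaling
        [1000.0, 0.5, 10.0 / 65536],   # ch1: different gain and offset
    ]
    unscaled = [
        [0, 16384, 32767],             # ch0 raw I16 codes
        [0, -16384, -32768],           # ch1 raw I16 codes
    ]
    for chan, (row, codes) in enumerate(zip(header_info, unscaled)):
        sample_rate, offset, *gains = row          # split the row: rate, a0, higher-order a's
        dt = 1.0 / sample_rate                     # dt for the channel's waveform
        scaled = [offset] * len(codes)             # offset initializes the scaled array
        for order, gain in enumerate(gains, 1):    # add a1*x^1, a2*x^2, ... per sample
            scaled = [s + gain * (x ** order) for s, x in zip(scaled, codes)]
        print(f"ch{chan}: dt={dt}, scaled={scaled}")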
    I normally just convert everything to volts and then manually scale from there if I want to convert to engineering units. I suspect that if you use the Express VIs (or configure the task using Create DAQmx Task in the Data Neighborhood of MAX) to configure a channel for temperature measurement, the required scaling coefficients will be incorporated into the Header Information array automatically when the data is saved, and you won't have to do anything manually other than selecting the appropriate task when configuring your acquisition.
    Hope this answers your questions.
    Chris
    Message Edited by C. Minnella on 04-15-2005 02:42 PM
    Attachments:
    Convert Unscaled to Scaled.jpg ‏81 KB

  • How can I find an example about acquiring waveform data by software trigger using PXI 4070 DMM?

    Could anybody provide an example, or something similar, about acquiring waveform data with a software trigger using the PXI-4070 DMM?
    Thanks!

    hi there
    from the NI main page go to the Developer Zone: http://www.ni.com/devzone/dev_exchange/ex_search.htm. Select "LabVIEW" and "Digital Multimeter (DMM)" and search for "4070"; then you'll find some examples.
    Best regards
    chris
    CL(A)Dly bending G-Force with LabVIEW
    famous last words: "oh my god, it is full of stars!"

  • RFSA Fetch IQ data type and base band frequency knowledge

    Hi,
    Could someone please answer the following two questions -
    In the RFSA Acquire Continuous IQ VI, the output of the 'niRFSA Fetch IQ' step is of the waveform data type (which, from my understanding, is an analog waveform). However, in the 5663 help and documents it is mentioned that the 5622 digitally downconverts the signal after frequency translation is done by the 5601 and 5652, and that LabVIEW can then read this data in digital format. So why is the output data from 'niRFSA Fetch IQ' not a digital waveform?
    If a single-tone RF signal (say 1 GHz) is generated using the 5673, is there any way I can find out what baseband frequency is upconverted by the 5450 and 5652 to produce this 1 GHz RF?
    Thanks so much,
    Sharmi

    Hi Sharmi,
    You are correct, the waveform data type that is being output is essentially in an analog form and not a digital form.  This is because if the output of "niRFSA Fetch IQ" were in digital form it would be unrecognizable, since a digital waveform is built of 1's and 0's.  "niRFSA Fetch IQ" takes the digital data and converts it into an analog representation so that we are able to visually observe it on the front panel of LabVIEW.
    As for your second question: because you are doing a single-tone generation, the LO source, the 5652, generates a 1 GHz signal which requires no modulation, so there is no baseband frequency being upconverted.
    Regards,
    Marcus
    Marcus M.
    PXI Product Support Engineer
    National Instruments

  • How to acquire 1 data point per trigger ?

    Dear all, can anyone help me acquire 1 data point per trigger, just like a sample-and-hold circuit?
    I modified the LabVIEW example "Acquire 1 Point Digital Trig"; however, it doesn't work. It says the data acquisition is inconsistent with the buffer. The program is attached.
    I also tried "AI Sample Channels.vi", but it has no trigger input or sampling rate input.
    Thank you very much for your help.
    Attachments:
    Acquire1PointTrig.vi ‏81 KB

    There is a good example that can do almost exactly what you want to do.  It is called Acq&Graph Voltage - Int Clk - HW Trig Restarts.vi.  There are only three things you need to change: change the triggering VI to trigger off of a digital signal, change the read VI to read a single channel and a single sample, and finally change the Samples per Channel to 2.  I have included screenshots of the VI.
    The VI works by setting up a digital start trigger that is retriggerable, so that you can have this trigger start the acquisition over and over.  Once the trigger happens, a sample is acquired, read, and then the hardware waits for another trigger.
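    For anyone doing the same thing from text-based code instead of LabVIEW, a rough equivalent of that retriggerable setup with the nidaqmx Python package might look like the sketch below. This is only an outline: the device, channel, and PFI terminal names ("Dev1/ai0", "/Dev1/PFI0") and the sample rate are assumptions you would replace with your own.
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Edge

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        # Finite acquisition, 2 samples per channel, mirroring the example's setting.
        task.timing.cfg_samp_clk_timing(rate=1000.0,
                                        sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=2)
        # Digital rising-edge start trigger, made retriggerable so every new
        # edge restarts the acquisition.
        task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0",
                                                            trigger_edge=Edge.RISING)
        task.triggers.start_trigger.retriggerable = True
        task.start()
        for _ in range(10):   # e.g. ten triggered reads
            print(task.read(number_of_samples_per_channel=2))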
    Have a great day,
    Brian
    Message Edited by Brian C on 03-28-2007 01:48 PM
    Brian Coalson
    Software Engineer
    National Instruments
    Attachments:
    block diagram.Jpg ‏54 KB
    front panel.Jpg ‏58 KB

  • Custom Map using latitude and longitude data points

    Hi,
    I am new to Apex and I want to plot custom data points using latitude and longitude data points. I
    have seen posts referring to the chart example (http://apex.oracle.com/pls/apex/f?p=36648:65:2214483882702::NO:::); could someone help me with the following:
    1) How to add On Demand Application Process to a map page (step 4 in the demo)
    2) What is a hidden item and how to add it to a page (step 6)
    Any help would be greatly appreciated.
    Kind regards,
    Lisa

    I am trying to do the same thing. I have got the get_data function working to create the desired output. However, when I replace the XML <data> block with &P65_DATA, it does not work. If I display P65_DATA on the page, it has the correct output. If I cut and paste that output into custom XML, it works fine. Has anyone come across this issue? Any ideas how to fix it?

  • How do I use the High Speed Data Logger with multiple I/O devices?

    I am using the High Speed Data Logger vi to read from a 16 channel A/D card (NI PCI-MIO-16E). The project may require more than 16 channels. How can I use High Speed Data Logger to read from two A/D cards? Will it be able to write the data to one file?

    The High Speed Data Logger VI will not acquire from and write to multiple DAQ boards at the same time without modification. LabVIEW is more than capable of doing what you are trying to do, but you will have to modify the code.
    Regards,
    Anuj D.

  • What table and fields do we have to use to develop a report for blocked invoices?

    What table and fields do we have to use to develop a report for blocked invoices?

    VBRK-RFBSK
          Error in Accounting Interface
    A     Billing document blocked for forwarding to FI
    B     Posting document not created (account determ. error)
    C     Posting document has been created
    D     Billing document is not relevant for accounting
    E     Billing Document Canceled
    F     Posting document not created (pricing error)
    G     Posting document not created (export data missing)
    H     Posted via invoice list
    I     Posted via invoice list (account determination error)
    K     Accounting document not created (no authorization)
    L     Billing doc. blocked for transfer to manager (only IS-OIL)
    M     Analyst Approval refused (only IS-OIL)
    N     No posting document due to fund management (only IS-PS)
    Regards
    prabhu

  • Use of raise in exception handling block

    what is the use of raise in exception handling block for eg.
    declare
    a number;
    b emp.empno%type;
    begin
    begin
    SELECT empno INTO a FROM emp where 1=2;
    exception
    when others then
    dbms_output.put_line('inner');
    raise;
    end;
    exception
    when no_data_found then
    dbms_output.put_line('outer');
    end;
    output will be like below ..
    inner
    outer
    PL/SQL procedure successfully completed.
    my question is wht is the use of using raise in exception handing part, is there any specific reason we use in the development ????
    Regards,
    AAK.

    In the first block, you do not raise you user-defined exception WHEN_NO_DATA_FOUND, but the predefined one, which is raised to the WHEN OTHERS exception handler.
    Consider:
    SQL> declare
      2     a number;
      3     my_err exception;
      4     no_data_found exception;
      5  begin
      6     begin
      7        select 1 into a from dual where 1=2;
      8     exception
      9     when no_data_found then
    10        dbms_output.put_line(' In system defined');
    11        raise my_err;
    12     end;
    13  exception
    14     when my_err then
    15        dbms_output.put_line('In User Defined');
    16     when others then
    17        declare
    18           v_sqlerrm varchar2(100);
    19        begin
    20           v_sqlerrm := sqlerrm;
    21        dbms_output.put_line(' In when others '||sqlerrm);
    22        end;
    23  end;
    24  /
    In when others ORA-01403: no data found
    PL/SQL procedure successfully completed.
    SQL> ed
    Wrote file afiedt.buf
      1  declare
      2     a number;
      3     my_err exception;
      4     --no_data_found exception;
      5  begin
      6     begin
      7        select 1 into a from dual where 1=2;
      8     exception
      9     when no_data_found then
    10        dbms_output.put_line(' In system defined');
    11        raise my_err;
    12     end;
    13  exception
    14     when my_err then
    15        dbms_output.put_line('In User Defined');
    16     when others then
    17        declare
    18           v_sqlerrm varchar2(100);
    19        begin
    20           v_sqlerrm := sqlerrm;
    21        dbms_output.put_line(' In when others '||sqlerrm);
    22        end;
    23* end;
    SQL> /
    In system defined
    In User Defined
    PL/SQL procedure successfully completed.
    SQL>Note that in the second block, I deleted the declaration of the user defined exception NO_DATA_FOUND.
    This is taken from the documentation:
    Redeclaring Predefined Exceptions
    Remember, PL/SQL declares predefined exceptions globally in package STANDARD, so you need not declare them yourself. Redeclaring predefined exceptions is error prone because your local declaration overrides the global declaration. For example, if you declare an exception named invalid_number and then PL/SQL raises the predefined exception INVALID_NUMBER internally, a handler written for INVALID_NUMBER will not catch the internal exception. In such cases, you must use dot notation to specify the predefined exception, as follows:
    EXCEPTION
      WHEN invalid_number OR STANDARD.INVALID_NUMBER THEN
        -- handle the error
    END;You can read yourself :
    http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14261/errors.htm
    Regards,
    Gerd

  • I am trying to use IMAQ acquire and other IMAQ functions with Queue functions.

    I am trying to use IMAQ Acquire and other IMAQ functions with queue functions.
    I mean I would like to acquire the image into a queue and dequeue it afterwards. Will the queue function accept the IMAQ data type?

    Refer to this posting: http://exchange.ni.com/servlet/Redirect?id=8879554

  • Acquire continuous data but write to excel at intervals

    I am acquiring continuous data (voltage & temp) from a DAQmx unit. At the same time I would like to record to a spreadsheet X number of samples (say 100) at 1-minute intervals only. Also, I would really like to record only two values from those 100 samples (a min & max value). I have working code; however, I cannot figure out how to write to the spreadsheet at the 1-minute intervals for X number of samples. So I am stuck. TIA
    Attachments:
    Prog3.vi ‏220 KB

    Thanks for the info. I will have to examine how to modify my code to use the producer/consumer design. Would I simply put my acquisition of the data in the first loop, with my graphs etc., and then create a second while loop to record the data? I don't understand the third part of the diagram, "Release Queue". Sorry, I'm a noob!
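    The producer/consumer pattern referenced above is roughly that: acquisition (and any live graphing) in the first loop, logging in a second loop, with a queue in between; releasing the queue at the end is what lets the consumer loop stop. Here is a minimal, hardware-free Python sketch of the same idea; the interval, block size, and file name are placeholders, and the random data stands in for the DAQmx read:
    import queue, random, threading, time

    data_q = queue.Queue()
    INTERVAL_S = 5            # stand-in for the 1-minute logging interval
    SAMPLES_PER_BLOCK = 100   # "X number of samples"

    def producer(stop):
        # Acquire (here: simulate) a block of samples each interval and enqueue it.
        while not stop.is_set():
            block = [random.uniform(0.0, 5.0) for _ in range(SAMPLES_PER_BLOCK)]
            data_q.put(block)
            time.sleep(INTERVAL_S)
        data_q.put(None)      # sentinel: plays the role of "Release Queue"

    def consumer():
        # Dequeue blocks and log only the min and max of each one.
        with open("minmax_log.csv", "w") as f:
            while True:
                block = data_q.get()
                if block is None:           # queue released -> loop stops cleanly
                    break
                f.write(f"{min(block)},{max(block)}\n")

    stop = threading.Event()
    threads = [threading.Thread(target=producer, args=(stop,)),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    time.sleep(16)
    stop.set()
    for t in threads:
        t.join()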

  • Fix for "Graph Acquired Binary Data.vi" in daqmx 7.4's "Measure Voltage.llb". (factory gains are reversed)

    While perusing binary data collected from the SCXI-1102 and USB 1600, the gains were found
    to be reversed when using the DAQmx writer/reader example code from NI. This bug only surfaces if you're using different gains among the channels.
    http://sine.ni.com/apps/we/niepd_web_display.display_epd4?p_guid=F7199E619CBF4215E0340003BA230ECF&p_node=201208&p_submitted=Y&p_rank=5&p_answer=yes
    has the fix for the binary to floating point viewer vi, "Graph Acquired Binary Data.vi".
    Enjoy.
    Best,
    Davy Baker
    GAMI

    duplicate post
    Continue in other thread
    Message Edited by smercurio_fc on 11-14-2008 09:10 AM

  • Data corrupt block

    OS: Sun 5.10, Oracle version 10.2.0.2, 2-node RAC
    alert.log contents:
    Hex dump of (file 206, block 393208) in trace file /oracle/app/oracle/admin/DBPGIC/udump/dbpgic1_ora_1424.trc
    Corrupt block relative dba: 0x3385fff8 (file 206, block 393208)
    Bad header found during backing up datafile
    Data in bad block:
    type: 32 format: 0 rdba: 0x00000001
    last change scn: 0x0000.98b00394 seq: 0x0 flg: 0x00
    spare1: 0x1 spare2: 0x27 spare3: 0x2
    consistency value in tail: 0x00000001
    check value in block header: 0x0
    block checksum disabled
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    Reread of blocknum=393208, file=/dev/md/vg_rac06/rdsk/d119. found same corrupt data
    When I search for the block id where the corruption occurred, the block id cannot be found.
    I searched using dba_extents.
    I am wondering whether the block id cannot be found because of the corruption.
    When I run an export, the data exports normally.

    That is fortunate: it looks like the block corruption did not occur in a block where data is stored. It also seems you found it through an RMAN backup; is that right?
    Since the SCN is 0x0000.98b00394 rather than 0x0000.00000000, this looks like soft corruption rather than physical corruption.
    In that case there is a good chance it is a bug; searching, I found
    Bug 4411228 - Block corruption with mixture of file system and RAW files.
    It may not be this one, of course.
    For the handling and root-cause analysis of this kind of block corruption, you should make a formal request to Oracle Corporation. Please open an SR through MetaLink.
    Export cannot detect block corruption above the high water mark, and there are a few other cases, listed below, that it cannot detect either.
    DBVERIFY (dbv) cannot detect physical corruption; it can only find soft block corruption.
    In my experience, even when physical corruption had occurred and the datafile could not even be copied to /dev/null, dbv did not find the problem.
    The best tool, then, is RMAN. RMAN backs up the data up to the high water mark and also checks the entire datafile as it goes.
    Since it checks not only physical corruption but also logical corruption, I think RMAN is the best way to verify the files.
    The Export Utility
    # Use a full export to check database consistency
    # Export performs a full scan for all tables
    # Export only reads:
    - User data below the high-water mark
    - Parts of the data dictionary, while looking up information concerning the objects being exported
    # Export does not detect the following:
    - Disk corruptions above the high-water mark
    - Index corruptions
    - Free or temporary extent corruptions
    - Column data corruption (like invalid date values)
    The proper way to recover from block corruption is to restore and then recover, but the backup you would restore from may itself already contain the block corruption. It is therefore best to restore it on another server first, confirm that the datafile is healthy, and only then restore it in the production environment.
    If the backup is also corrupted, or if there is no time for this, it would be better to move the data to another tablespace via a move tablespace or index rebuild, then drop the problematic tablespace and re-create it. (Since there is currently no data loss, the move tablespace / rebuild index approach would be a good choice.)
    Handling Corruptions
    Check the alert file and system log file
    Use diagnostic tools to determine the type of corruption
    Dump blocks to find out what is wrong
    Determine whether the error persists by running checks multiple times
    Recover data from the corrupted object if necessary
    Preferred resolution method: media recovery
    Handling Corruptions
    Always try to find out if the error is permanent. Run the analyze command multiple times or, if possible, perform a shutdown and a startup and try again to perform the operation that failed earlier.
    Find out whether there are more corruptions. If you encounter one, there may be other corrupted blocks, as well. Use tools like DBVERIFY for this.
    Before you try to salvage the data, perform a block dump as evidence to identify the actual cause of the corruption.
    Make a hex dump of the bad block, using UNIX dd and od -x.
    Consider performing a redo log dump to check all the changes that were made to the block so that you can discover when the corruption occurred.
    Note: Remember that when you have a block corruption, performing media recovery is the recommended process after the hardware is verified.
    Resolve any hardware issues:
    - Memory boards
    - Disk controllers
    - Disks
    Recover or restore data from the corrupt object if necessary
    Handling Corruptions (continued)
    There is no point in continuing to work if there are hardware failures. When you encounter hardware problems, the vendor should be contacted and the machine should be checked and fixed before continuing. A full hardware diagnostics should be run.
    Many types of hardware failures are possible:
    Bad I/O hardware or firmware
    Operating system I/O or caching problem
    Memory or paging problems
    Disk repair utilities
    Here is some related material below.
    All About Data Blocks Corruption in Oracle
    Vijaya R. Dumpa
    Data Block Overview:
    Oracle allocates logical database space for all data in a database. The units of database space allocation are data blocks (also called logical blocks, Oracle blocks, or pages), extents, and segments. The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks allocated for storing a specific type of information. The level of logical database storage above an extent is called a segment. The high water mark is the boundary between used and unused space in a segment.
    The header contains general block information, such as the block address and the type of segment (for example, data, index, or rollback).
    Table Directory, this portion of the data block contains information about the table having rows in this block.
    Row Directory, this portion of the data block contains information about the actual rows in the block (including addresses for each row piece in the row data area).
    Free space is allocated for insertion of new rows and for updates to rows that require additional space.
    Row data, this portion of the data block contains rows in this block.
    Analyze the Table structure to identify block corruption:
    By analyzing the table structure and its associated objects, you can perform a detailed check of data blocks to identify block corruption:
    SQL> analyze table_name/index_name/cluster_name ... validate structure cascade;
    Detecting data block corruption using the DBVERIFY Utility:
    DBVERIFY is an external command-line utility that performs a physical data structure integrity check on an offline database. It can be used against backup files and online files. Integrity checks are significantly faster if you run against an offline database.
    Restrictions:
    DBVERIFY checks are limited to cache-managed blocks. It’s only for use with datafiles, it will not work against control files or redo logs.
    The following example is sample output of verification for the data file system_ts_01.dbf. And its Start block is 9 and end block is 25. Blocksize parameter is required only if the file to be verified has a non-2kb block size. Logfile parameter specifies the file to which logging information should be written. The feedback parameter has been given the value 2 to display one dot on the screen for every 2 blocks processed.
    $ dbv file=system_ts_01.dbf start=9 end=25 blocksize=16384 logfile=dbvsys_ts.log feedback=2
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    Output:
    $ pg dbvsys_ts.log
    DBVERIFY: Release 8.1.7.3.0 - Production on Fri Sep 13 14:11:52 2002
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE = system_ts_01.dbf
    DBVERIFY - Verification complete
    Total Pages Examined : 17
    Total Pages Processed (Data) : 10
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index) : 2
    Total Pages Failing (Index) : 0
    Total Pages Processed (Other) : 5
    Total Pages Empty : 0
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    Detecting and reporting data block corruption using the DBMS_REPAIR package:
    Note: Note that this event can only be used if the block "wrapper" is marked corrupt.
    Eg: If the block reports ORA-1578.
    1. Create DBMS_REPAIR administration tables:
    To create the repair tables, run the package below.
    SQL> EXEC DBMS_REPAIR.ADMIN_TABLES('REPAIR_ADMIN', 1, 1, 'REPAIR_TS');
    Note that the table names are prefixed with 'REPAIR_' or 'ORPHAN_'. If the second argument is 1, it will create REPAIR_ key tables; if it is 2, it will create ORPHAN_ key tables.
    If the third argument is
    1, the package performs 'create' operations;
    2, the package performs 'delete' operations;
    3, the package performs 'drop' operations.
    2. Scanning a specific table or Index using the DBMS_REPAIR.CHECK_OBJECT procedure:
    In the following example we check the table EMP, which belongs to the schema TEST, for possible corruption. Let's assume that we have created our administration table called REPAIR_ADMIN in the schema SYS.
    To check the table block corruption use the following procedure:
    SQL> VARIABLE A NUMBER;
    SQL> EXEC DBMS_REPAIR.CHECK_OBJECT('TEST', 'EMP', NULL,
    1, 'REPAIR_ADMIN', NULL, NULL, NULL, NULL, :A);
    SQL> PRINT A;
    To check which block is corrupted, check in the REPAIR_ADMIN table.
    SQL> SELECT * FROM REPAIR_ADMIN;
    3. Fixing corrupt block using the DBMS_REPAIR.FIX_CORRUPT_BLOCK procedure:
    SQL> VARIABLE A NUMBER;
    SQL> EXEC DBMS_REPAIR.FIX_CORRUPT_BLOCKS('TEST', 'EMP', NULL,
    1, 'REPAIR_ADMIN', NULL, :A);
    SQL> SELECT MARKED FROM REPAIR_ADMIN;
    If you select the EMP table now you will still get the error ORA-1578.
    4. Skipping corrupt blocks using the DBMS_REPAIR.SKIP_CORRUPT_BLOCKS procedure:
    SQL> EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('TEST', 'EMP', 1, 1);
    Notice the result of running the DBMS_REPAIR tool: you have lost some of the data. One main advantage of this tool is that you can retrieve the data past the corrupted block; however, the data in the corrupted block itself is lost.
    5. This procedure is useful in identifying orphan keys in indexes that are pointing to corrupt rows of the table:
    SQL> EXEC DBMS_REPAIR.DUMP_ORPHAN_KEYS('TEST', 'IDX_EMP', NULL,
    2, 'REPAIR_ADMIN', 'ORPHAN_ADMIN', NULL, :A);
    If you see any records in the ORPHAN_ADMIN table, you have to drop and re-create the index to avoid any inconsistencies in your queries.
    6. The last thing you need to do while using the DBMS_REPAIR package is to run the DBMS_REPAIR.REBUILD_FREELISTS procedure to reinitialize the free list details in the data dictionary views.
    SQL> EXEC DBMS_REPAIR.REBUILD_FREELISTS('TEST', 'EMP', NULL, 1);
    NOTE
    Setting events 10210, 10211, 10212, and 10225 can be done by adding the following line for each event in the init.ora file:
    Event = "event_number trace name errorstack forever, level 10"
    When event 10210 is set, the data blocks are checked for corruption by checking their integrity. Data blocks that don't match the format are marked as soft corrupt.
    When event 10211 is set, the index blocks are checked for corruption by checking their integrity. Index blocks that don't match the format are marked as soft corrupt.
    When event 10212 is set, the cluster blocks are checked for corruption by checking their integrity. Cluster blocks that don't match the format are marked as soft corrupt.
    When event 10225 is set, the fet$ and uset$ dictionary tables are checked for corruption by checking their integrity. Blocks that don't match the format are marked as soft corrupt.
    Set event 10231 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing full table scans:
    Event="10231 trace name context forever, level 10"
    Set event 10233 in the init.ora file to cause Oracle to skip software- and media-corrupted blocks when performing index range scans:
    Event="10233 trace name context forever, level 10"
    To dump an Oracle block you can use the command below, available from 8.x onwards:
    SQL> ALTER SYSTEM DUMP DATAFILE 11 block 9;
    This command dumps data block 9 in datafile 11 into the USER_DUMP_DEST directory.
    Dumping Redo Logs file blocks:
    SQL> ALTER SYSTEM DUMP LOGFILE '/usr/oracle8/product/admin/udump/rl.log';
    Rollback segment block corruption will cause problems (ORA-1578) while starting up the database.
    With the support of Oracle, you can use the undocumented parameter below to start up the database.
    _CORRUPTED_ROLLBACK_SEGMENTS = (RBS_1, RBS_2)
    DB_BLOCK_COMPUTE_CHECKSUM
    This parameter is normally used to debug corruptions that happen on disk.
    The following V$ views contain information about blocks marked logically corrupt:
    V$BACKUP_CORRUPTION, V$COPY_CORRUPTION
    When this parameter is set, while reading a block from disk into the cache, Oracle will compute the checksum again and compare it with the value stored in the block.
    If they differ, it indicates that the block is corrupted on disk. Oracle marks the block as corrupt and signals an error. There is an overhead involved in setting this parameter.
    DB_BLOCK_CACHE_PROTECT=‘TRUE’
    Oracle will catch stray writes made by processes in the buffer cache.
    Oracle 9i new RMAN features:
    Obtain the datafile numbers and block numbers for the corrupted blocks. Typically, you obtain this output from the standard output, the alert.log, trace files, or a media management interface. For example, you may see the following in a trace file:
    ORA-01578: ORACLE data block corrupted (file # 9, block # 13)
    ORA-01110: data file 9: '/oracle/dbs/tbs_91.f'
    ORA-01578: ORACLE data block corrupted (file # 2, block # 19)
    ORA-01110: data file 2: '/oracle/dbs/tbs_21.f'
    $rman target =rman/rman@rmanprod
    RMAN> run {
    2> allocate channel ch1 type disk;
    3> blockrecover datafile 9 block 13 datafile 2 block 19;
    4> }
    Recovering Data blocks Using Selected Backups:
    # restore from backupset
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM BACKUPSET;
    # restore from datafile image copy
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 FROM DATAFILECOPY;
    # restore from backupset with tag "mondayAM"
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 199 FROM TAG = mondayAM;
    # restore using backups made before one week ago
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL 'SYSDATE-7';
    # restore using backups made before SCN 100
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE UNTIL SCN 100;
    # restore using backups made before log sequence 7024
    BLOCKRECOVER DATAFILE 9 BLOCK 13 DATAFILE 2 BLOCK 19 RESTORE
    UNTIL SEQUENCE 7024;
    Message edited by:
    Min Angel (Yeon Hong Min, Korean)

  • About data for blocked sales order

    hi friends,
    I have one requirement: I want to create a BLOCKED SALES ORDER. What fields are required for the creation?
    If there is any BAPI for this, please tell me.
    Help me,
    Naresh

    Hello Naresh.
    You can show your blocked sales orders using tables MVKE (Sales Data for Material) and TVMS (Materials: Status in Sales and Distribution).
    The logic is: 1. Check field MVKE-VMSTA.
                  2. Find the corresponding entries of MVKE-VMSTA in table TVMS
                     by matching MVKE-VMSTA = TVMS-VMSTA.
                  3. For those same entries, check the blocked flag in field TVMS-SPVBC.
                     If TVMS-SPVBC = 'B', the sales order is blocked.
    Check all MATNR entries of MVKE according to this logic.
    In case of any problem, let me know so that I can help you further.
    Have a Nice Day,
    Regards,
    Sujeet.

  • Programming NI-6052 using VC++ to Display data in Graph or Chart

    Hi
    I am using Visual C to acquire the data and display it. I successfully ran the example shipped on the LabVIEW CD, located in C:\Program Files\National Instruments\NI-DAQ\Examples\ANSI C, for AI of a single voltage channel.
    But how do I display the acquired data in a graph or chart? And also, how do I store it in a file?
    help will be appreciated
    thanks

    Hello,
    I've only heard that Gigasoft's charting is absolutely an improvement over
    Measurement Studio's built-in charting: faster, way more robust, way better
    rendering, and far more attention to detail. Plus an easy API to use as
    needed, independent of development platform. Please visit www.gigasoft.com
    for engineering, scientific, and instrument/oscilloscope-type charting for
    Measurement Studio, MFC, .NET, and others.
    best regards,
    Robert Dede
    Gigasoft, Inc.
    www.gigasoft.com
    "Haider Abbas" <[email protected]> wrote in message
    news:[email protected]..
    > Hi
    > I am using Visual C to acquire the data and display it . I successfully
    > run the example shipped with Labview CD&nbsp; located in (C:\programm
    > files\Naitonal Instruments\nidaq\examples\ANSI C). for AI of single
    > Voltage channel.&nbsp;
    > But how to display the Acquired data to graph or Chart&nbsp;? and also how
    > to Store in a file?
    > help will be appreciated
    > thanks
    > &nbsp;
    > &nbsp;
