DRDT: breakpoint on data race?

Hi,
I just started to try DRDT, and it looks very promising. I understand that things like missing stack traces are due to the beta status of the tool. I am also a bit concerned about performance: I used it with an application that is about to be transformed from single-threaded to multithreaded, and I get about 800 data races. On a Blade-1500, it takes more than five minutes before the list of races is displayed in rdt.
It is immediately apparent, when looking at these data race stack traces, that it would be VERY useful to have a feature like "breakpoint on data race" in dbx! That would probably mean that the event mechanism in dbx would have to be extended accordingly, and that this would only work with instrumented executables, but this could be an invaluable feature.
Purify has something like that with memory errors: they invoke a function "purify_stop_here" on errors they detect, and developers can set breakpoints on this function. This is extremely useful.
Regards
Dieter Ruppert

As a workaround, if you can see the stack traces reported by DRDT, then you can
use some of the advanced breakpoint features in dbx to stop at exactly that point.
If you want to stop on entry to "foo" but only when foo is called from bar, you can
use "stop in foo -in bar" (note the dash in the "-in").
Your request sounds like a good RFE.

Similar Messages

  • What is the gcc/g++ compiler option for data race?

    Is data race detection supported on Linux using gcc/g++?
    It seems it is only supported on Solaris using -xinstrument=datarace option in "CC".
    Thanks,
    Ethan

    EthanWan wrote:
    Is data race detection supported on Linux using gcc/g++?
    This is not the best place to ask about GNU compiler options, this forum being about Sun Studio compilers. There are plenty of places devoted specifically to gcc/g++.

  • Data Race

    Hi,
    I am having a data race (race condition) issue in the following code (pseudo code):
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    public class MainClass implements Runnable {
        Vector queue = new Vector();
        public MainClass() {
            createInnerClass();
        }
        private void createInnerClass() {
            Thread th = new Thread(new InnerClass());
            th.start();
        }
        public void stop() {
            Loop(queue) {              <<< DATA RACE DETECTED
                queue.remove();
            }
        }
        public class InnerClass implements Runnable {
            public void run() {
                queue = new Vector();  <<<< DATA RACE DETECTED
                Loop(queue) {
                    queue.add(Object);
                }
            }
        }
    }
    So I have changed the code as follows:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    public class MainClass implements Runnable {
        Vector queue = new Vector();
        public MainClass() {
            createInnerClass();
        }
        private void createInnerClass() {
            Thread th = new Thread(new InnerClass());
            th.start();
        }
        public void stop() {
            synchronized (queue) {         <<< Put a monitor on queue
                Loop(queue) {              <<< DATA RACE IS STILL OCCURRING
                    queue.remove();
                }
            }
        }
        public class InnerClass implements Runnable {
            public void run() {
                synchronized (queue) {     <<< Put a monitor on queue
                    queue = new Vector();  <<<< DATA RACE IS STILL OCCURRING
                    Loop(queue) {
                        queue.add(Object);
                    }
                }
            }
        }
    }
    I have put a monitor on the queue variable, but the data race is still occurring (I am using JProbe ThreadAnalyzer to analyze the program).
    Suggestions will be greatly appreciated.
    With Regards
    Duke Biswas

    Thank you so much for your reply.
    I am a little bit confused when you mentioned that I have put sync on the object, not on the variable instance. I don't want to go back to Java basics, but my understanding is that an object and a variable are very much the same.
    No. The object is a chunk of memory, and the variable is your "handle" to it.
    Object o1 = new Object();
    Object o2 = new Object();
    Object o3 = o2;
                  Object #1
    +----+      +----------+
    | o1 | ---> |          |
    +----+      +----------+
                  Object #2
    +----+      +----------+
    | o2 | ---> |          |
    +----+      +----------+
                 ^
    +----+       |
    | o3 | ------+
    +----+
    When you do synchronized(o3) you're synchronizing on the object that o3 points to -- in this case, Object #2. If you then say o3 = new Object(); you'll end up with this picture:
                  Object #1
    +----+      +----------+
    | o1 | ---> |          |
    +----+      +----------+
                  Object #2
    +----+      +----------+
    | o2 | ---> |          |
    +----+      +----------+
                  Object #3
    +----+      +----------+
    | o3 | ---> |          |
    +----+      +----------+
    but you're still synced on Object #2. The reference (the variable) is only used as a way to find the object at the point where the word "synchronized" appears. Pointing the reference (variable) at different objects does not cause the lock to follow the reference.
    public class MainClass implements Runnable {
        Vector queue = new Vector();
        public class InnerClass implements Runnable {
            public void run() {
                synchronized (queue) {     >>>> SET A MONITOR
                    queue = new Vector();  >>>> DATA RACE IS STILL OCCURRING
                    Loop {
                        queue.add(c);
                    }
                }
            }
        }
    }
    You'd have to switch the order of assigning to queue and syncing on it:
    queue = new Vector();
    synchronized (queue) {
       ...
    }
    However, that code looks weird anyway. You initialize the queue member variable in the constructor, and then you assign a new Vector to it in the run method. I don't think you want to do that.
    I suspect there are other problems too, but I can't pinpoint anything at the moment.
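    To connect the pictures above back to Duke's queue: one conventional fix is to synchronize on a dedicated object whose identity never changes, so reassigning the queue field cannot leave threads holding the old Vector's monitor. A minimal sketch (the QueueHolder class and its method names are illustrative, not from the original post):

    ```java
    import java.util.Vector;

    public class QueueHolder {
        // Monitor whose identity never changes -- safe to lock even
        // while the 'queue' field itself is being replaced.
        private final Object lock = new Object();
        private Vector<Object> queue = new Vector<>();

        // Replacing the Vector inside synchronized(queue) would leave other
        // threads free to lock the *new* Vector concurrently; locking the
        // dedicated object guards every access, old Vector or new.
        public void reset() {
            synchronized (lock) {
                queue = new Vector<>();
            }
        }

        public void add(Object o) {
            synchronized (lock) {
                queue.add(o);
            }
        }

        public int size() {
            synchronized (lock) {
                return queue.size();
            }
        }
    }
    ```

    With this shape, a stop()/run() pair can safely replace the Vector, because every access path locks the same unchanging lock object.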

  • EEO Data -- Race & Gender ( E-Recruiting)

    My customer wants to collect Race and Gender on a couple of screens in E-Recruiting (rather than just in the EEO Questionnaire).
    Where are Race and Gender stored for candidates (internal / external) in E-Recruiting?

    Hi Vishal,
    There is no standard field available, but this requirement can be handled through an enhancement in the specific BSP page. In your case it will be the personal details tab in the candidate profile wizard, and it would also necessitate enhancing infotype HRP5102 (candidate information) with this field.
    You can also refer the blog below contributed by Hemendra Singh Manral to understand how to go about doing this enhancement in the BSP.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/other-topics/e-recruitment%20adding%20additional%20custom%20fields%20to%20requisition%20maintenance.pdf
    Hope this information helps.
    Best Regards
    G Raj

  • Data race detection tool

    Is there an easy way to inquire whether any error has been detected?
    It would be nice if in a makefile this would be possible:
    rdt -check race.er
    echo $?
    regards,
    Dieter

    Yes, that's what I just programmed.
    So I am answering my own stupid question with this little script, called check_races:
    #!/bin/ksh
    # Succeed (exit 0) when the experiment reports no races, fail otherwise.
    # Note: capturing the pipeline in rc=$(...) would grab its (empty) stdout,
    # not its exit status, and "return" is invalid at the top level of a script.
    echo races | er_print $1 2> /dev/null | grep 'Total Races: 0' > /dev/null 2>&1
    exit $?
    and then in the Makefile:
    check_races races.er || rdt races.er
    Thanks
    Dieter

  • Rac to non rac standby file name convert option

    Hi,
    Rac to non rac with ASM both, please find the below details.
    Primary Setup.
    diskgroup NAME for all files for both primary and standby +DATA
    pmon,unique and service name at standby-rac1, rac2
    db version -11.2.0.1
    platform - HP -UX
    $ ps -ef | grep pmon
    oracle11 22329 1 0 22:33:55 ? 0:28 ora_pmon_rac1
    oragrid 23522 1 0 Jan 2 ? 1:30 asm_pmon_+ASM1
    $ ps -ef | grep pmon
    oracle11 22329 1 0 22:33:55 ? 0:28 ora_pmon_rac2
    oragrid 23522 1 0 Jan 2 ? 1:30 asm_pmon_+ASM2
    database and Unique name - RAC, so all files on ASM created under +DATA/RAC
    Stanby setup
    diskgroup NAME for all files for both primary and standby +DATA
    pmon,unique and service name at standby- racdr
    $ ps -ef | grep pmon
    oracle11 22329 1 0 22:33:55 ? 0:28 ora_pmon_racdr
    oragrid 23522 1 0 Jan 2 ? 1:30 asm_pmon_+ASM
    database name and unique name are racdr, so all files on ASM are created under +DATA/racdr when cloned with RMAN from a primary backup (by setting log_file_name_convert and db_file_name_convert from '+DATA/RAC' to '+DATA/racdr').
    So now everything is ok... but how do we create new datafiles at the primary so that the above setup keeps working, and so that they are reflected on the standby?
    1. create tablespace tbs size 10g;
    2. create tablespace tbs datafile '+DATA/RAC/test.dbf' size 10g; ??
    3. create tablespace tbs datafile '+DATA' size 10g; ???
    4. Are directory names on ASM, and those specified in the convert options, case sensitive?
    Thanks in advance...

    That's fine, John.
    log_file_name_convert and db_file_name_convert are set from '+DATA/RAC' to '+DATA/racdr'.
    My primary is rac; my standby is racdr.
    So I mentioned the above folders in the convert options. If I create a datafile with or without specifying a diskgroup, or with or without specifying the directory, as below, will they all be created under '+DATA/racdr'?
    create tablespace test DATA size 10m; --> not specifying a full path such as '+DATA/RAC', so will it go to '+DATA/racdr'? Or be created under the DATA diskgroup on the standby?
    Thanks,

  • Query performance is slow in RAC

    Hi,
    I am analyzing the purpose of Oracle RAC and how it would fit into / be useful for our product. So I have set up a two-node RAC 10g in our lab and I am doing various testing with RAC.
    Test 1 : Fail-over:
    ~~~~~~~~~~~
    First I started with fail-over testing and did two types of tests: "connect-time" failover and "TAF".
    TAF has a limitation here: it does not handle DML transactions.
    Test 2 : Performance:
    ~~~~~~~~~~~~~~
    Second, I did performance testing. I used 10,000 records for insert, update, read, and delete operations with single-node and two-node instances. There was no performance difference between single and two nodes.
    But I had assumed RAC would provide higher performance than a single-instance Oracle.
    So I am confused about whether we should choose Oracle RAC for our project.
    DBAs,
    Please give me your answers to the following questions; it will be a great help for me in coming to a conclusion:
    1. What is the main purpose of RAC (because by my assessment, failover is only partially supported and there is no difference in query-processing performance)?
    2. What kind of business environment will RAC fit perfectly?
    3. What are the unique benefits of RAC that are not in single-instance Oracle?
    Thanks
    Edited by: Anandhan on Aug 7, 2009 1:40 AM

    Hi!
    Well, RAC ensures High Availability. With conditions applied!
    For the database, create more than one service and have applications connect to the database using these services.
    RAC access to the database is service driven. So if planned thoughtfully, load on the database can be distributed physically using the services created for the database.
    So if you have a single database servicing more than one application (of any type(s), i.e. OLTP/warehouse etc.), connect to the database using different services so that the init parameters are set for the purpose of the connection.
    NOTE: each database instance running on a node can have a different init_sid.ora to ensure optimum performance for the designated purpose.
    RAC uses Cache Fusion to reduce I/O on a running production server by transferring buffers from the global cache to the nodes that require them, thus reducing physical reads. This is its contribution on the performance front.
    For any database that requires access with different init.ora settings to the same physical data, RAC is the best way!
    For High Availability, use a TAF-type service.

  • Bugs in DRDT tutorial

    Section 6.2.1 of the tutorial says:
    18 if (!pflag)
    19 continue;
    20 if (v % i == 0) {
    21 pflag[v] = 0;
    22 return 0;
    23 }
    The Data-Race Detection Tool reports that there is a data-race between the Write to pflag[] on line 21 and the Read of pflag[] on line 18. However, this data-race is benign as it does not affect the correctness of the final result. At line 18, a thread checks whether pflag[i], for a given value of i is equal to 0. If pflag[i] is equal to 0, then the thread continues on to the next value of i. If pflag[] is not equal to 0 and v is divisible by i, then the thread writes the value 0 to pflag[i]. It does not matter if, from a correctness point of view, multiple threads check the same pflag[i] and write to it concurrently, since the only value that is written to pflag[i] is 0.
    Looking closely at the text, the reference to pflag[] should be pflag[i], and the last three references to pflag[i] should really be to pflag[v]. No, wait, that's not right, either. pflag[i] is not both checked and written. pflag[i] is checked, and pflag[v] is written. The paragraph itself needs to be re-written (perhaps it was edited concurrently by multiple authors? :-) ).
    Also, Section 6.2 says there are two examples below, when in fact there are three.
    Also, Section 6.2.2 says:
    20 volatile int is_bad = 0;
    106 int i;
    107 for (i=my_start(thread_id); i<my_start(thread_id); i++) {
    108 if (is_bad)
    109 return;
    110 else {
    111 if (is_bad_element(data_array[i])) {
    112 is_bad = 1;
    113 return;
    114 }
    115 }
    116 }
    There is a data-race between the Read of is_bad on line 108 and the Write of is_bad on line 112. However, the data-race does not affect the correctness of the final result.
    But no, that's not really why there's no bad data race. The real reason is that the loop condition will never cause the body of the loop to be executed, if (as one would expect) my_start() always returns the same result given the same parameter. Also, once that is fixed, shouldn't you be mentioning that the apparently benign character of this race depends on the difference between the two values (0 and 1) being only one bit, and therefore one need not worry about the atomicity of the write? That is, if you had a 32-bit integer, and the initial value had the low bit set and the final value had the high bit set (with the value test correspondingly adjusted), and if the machine architecture allowed a 32-bit integer to be written at the hardware level in two 16-bit chunks, you'd still have to worry about a race condition potentially making invalid values appear in the shared is_bad variable.
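    For what it's worth, the early-exit flag discussed above can be written so the benign-versus-harmful question never arises. A sketch in Java (class and method names are mine, not the tutorial's, and a simple negative-value check stands in for is_bad_element()): an AtomicBoolean gives both atomic updates and cross-thread visibility, so neither word tearing nor a stale read can produce an invalid flag value.

    ```java
    import java.util.concurrent.atomic.AtomicBoolean;

    public class BadElementScan {
        // Shared early-exit flag: atomic and visible to all worker threads.
        private final AtomicBoolean isBad = new AtomicBoolean(false);

        // Each worker scans its own [start, end) slice; the first one to
        // find a bad element sets the flag, and later workers return early.
        public void scan(int[] data, int start, int end) {
            for (int i = start; i < end; i++) {
                if (isBad.get()) {
                    return;           // another thread already found a bad element
                }
                if (data[i] < 0) {    // stand-in for is_bad_element()
                    isBad.set(true);
                    return;
                }
            }
        }

        public boolean foundBad() {
            return isBad.get();
        }
    }
    ```

    In Java a plain volatile boolean would also do here, since the JVM guarantees untorn reads and writes of boolean; the AtomicBoolean form simply makes the intent explicit.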
    Also, I find it rather astounding that the third example is of double-checked locking, especially without any mention of the history of and problems with this idea (http://en.wikipedia.org/wiki/Double-checked_locking and http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf).
    Finally, in the usage flow diagram in Section 5.4, under "L1: Perform a data-race detection experiment:" you should mention the use of processor sets: http://developers.sun.com/solaris/articles/solaris_processor.html

    Herteg,
    Thank you very much for your detail review and good suggestions.
    The upper bound of the loop in section 6.2.2 should be my_end(thread_id).
    We will update the document.
    Thanks!
    -- Yuan

  • DB creation issues in RAC environment

    Hi All,
    I Installed Oracle 11g R2 RAC on VM.
    I followed steps mentioned in below
    http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php
    At the time of DB creation, the DB creation was successful on one node but failed on the other node.
    I reinstalled everything fresh, but I am still facing the issue.
    Thanks in advance.
    Your help really appreciated.
    ALTER DATABASE MOUNT /* db agent *//* {1:4386:799} */
    Mon Sep 12 18:02:14 2011
    NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libasm.so
    NOTE: Loaded library: System
    Mon Sep 12 18:02:14 2011
    SUCCESS: diskgroup DATA was mounted
    Mon Sep 12 18:02:20 2011
    NOTE: dependency between database RAC and diskgroup resource ora.DATA.dg is established
    Mon Sep 12 18:02:29 2011
    Errors in file /u01/app/oracle/diag/rdbms/rac/RAC2/trace/RAC2_ckpt_20486.trc (incident=153):
    ORA-00227: corrupt block detected in control file: (block 1, # blocks 1)
    ORA-00202: control file: '+DATA/rac/controlfile/current.260.761679717'
    Incident details in: /u01/app/oracle/diag/rdbms/rac/RAC2/incident/incdir_153/RAC2_ckpt_20486_i153.trc
    Mon Sep 12 18:02:34 2011
    SUCCESS: diskgroup DATA was dismounted
    Mon Sep 12 18:02:36 2011
    Dumping diagnostic data in directory=[cdmp_20110912180236], requested by (instance=2, osid=20486 (CKPT)), summary=[incident=153].
    ORA-00227: corrupt block detected in control file: (block 1, # blocks 1)
    ORA-00202: control file: '+DATA/rac/controlfile/current.260.761679717'
    Mon Sep 12 18:02:38 2011
    ORA-205 signalled during: ALTER DATABASE MOUNT /* db agent *//* {1:4386:799} */...
    Mon Sep 12 18:02:42 2011
    Shutting down instance (abort)
    License high water mark = 2
    USER (ospid: 20676): terminating the instance
    Mon Sep 12 18:02:43 2011
    ORA-1092 : opitsk aborting process
    Instance terminated by USER, pid = 20676
    Mon Sep 12 18:02:44 2011
    Instance shutdown complete

    Thanks for response.
    I recreated the DB, but it is still the same on node2, i.e. block corrupted.
    ORA-01092: ORACLE instance terminated. Disconnection forced
    ORA-00704: bootstrap process failure
    ORA-01578: ORACLE data block corrupted (file # 1, block # 520)
    ORA-01110: data file 1: '+DATA/XXXXX/datafile/system.269.761698357'
    Process ID: 28860
    Session ID: 34 Serial number: 3
    Just want to know: am I missing any parameter or ASM lib that is causing the data file corruption?
    2VM OEL5U7 , 11g R2 11.2.0.2.
    Thanks in advance.
    Edited by: 884629 on Sep 13, 2011 12:19 AM

  • PRCR-1079 Failed to start resource ora.rac.db - during installation

    Hi
    After successful installation of Grid Infrastructure I proceeded with database installation on the clusterware, and at the stage when the installer was creating the clone database I got the following errors (this was my 2nd attempt, and I got the same errors both times):
    Errors:
    PRCR-1079 : Failed to start resource ora.rac.db
    ORA-01092 : ORACLE instance terminated. Disconnection forced
    ORA-00704 : bootstrap process failure
    ORA-00604 : error occurred at recursive SQL level 2
    ORA-01578 : ORACLE data block corrupted (file # 1, block # 5505)
    ORA-01110 : data file 1:'+DATA/rac/datafile/system.256.799676855'
    Process ID : 23498
    Session ID : 63 Serial number 3
    CRS-2674 Start of 'ora.rac.db' on 'rac2' failed
    CRS-2632 There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
    There are no logs on that node (rac2)
    I am running Oracle Linux 5.4 64 bit
    As mentioned above, this was my 2nd attempt afresh and I got the same errors both times. Please let me know what the problem is, as rac2 is a replica of rac1 in VMWare.
    Thanks for your help
    Rgds
    T

    Hi
    I tried again for the 3rd time and got the same error again; this time I rebuilt node 2. Can someone please help me with this issue: why does it keep failing on node 2 at the same stage, now for the 3rd time in a row?
    Also, please help me clone the database manually from node 1 to node 2 so I don't have to reinstall it again; there must be a way to do it.
    Thanks for your help in advance
    Rgds
    T

  • Questions on V$BH

    I learnt that V$BH can tell us the contents of the buffer cache. I looked up the 11.2 document b28329. It says:
    V$BH displays the status and number of pings for every buffer in the SGA. This is a Real Application Clusters view.
    Questions:
    1) Is V$BH not good for single instance, since it is a Real Application Clusters view? I did see the view in non-RAC and queried it, and it returned lots of rows. Are the results meaningful?
    2) The phrase in the Oracle doc is hard to understand, as usual. Does 'number of pings for every buffer' mean those buffer cache blocks that are pinned in the SGA, or just every block of the buffer cache?
    I used ALTER SYSTEM FLUSH SHARED_POOL to clear the buffer cache in a single instance, and checked the buffer cache before and after the clearing with this query:
    select o.owner, o.OBJECT_TYPE, substr(o.OBJECT_NAME,1,20) objname, b.objd, b.status,
           count(b.objd)
    from v$bh b, dba_objects o
    where b.objd = o.data_object_id
      and o.owner='BISTG'
    group by o.owner, o.object_type, o.object_name, b.objd, b.status;
    The results are identical before and after. Has the buffer cache really been cleared?

    user13148231 wrote:
    I learnt that V$BH can tell us contents of buffer cache. I looked up 11.2 document b28329. It says: V$BH displays the status and number of pings for every buffer in the SGA. This is a Real Application Clusters view.
    That's a bit of documentation that is about 10 years out of date - RAC isn't supposed to "ping" (although it does); it's supposed to use "cache fusion". A "ping" is the term Oracle uses to label the action of one instance writing a block to disc for another to read. The modern term is "fusion write".
    1) Is v$BH not good for single instance, since it is a Real Application Clusters view? I did see the view in non-RAC and query it returning lots of rows. Are the results meaningful?
    v$bh is not relevant just to RAC systems: it is an outer join between x$bh (the buffer headers, which are always relevant) and x$le (the lock elements, which are relevant only to RAC). Being an outer join, it supplies information that can be used in single instance.
    2) The phrase in Oracle Doc is hard to understand as usual. Does 'number of pings for every buffer' means those buffer cache blocks that are pinned in SGA, or just every block of buffer cache?
    See above - but I don't think I'd look at v$bh (even in a RAC system) for pings and false pings without first checking whether Oracle has fixed the code that reports them.
    I used ALTER SYSTEM FLUSH SHARED_POOL to clear buffer cache in a single instance, and checked buffer cache before and after the clearing with the query:
    select o.owner, o.OBJECT_TYPE, substr(o.OBJECT_NAME,1,20) objname, b.objd, b.status,
           count(b.objd)
    from v$bh b, dba_objects o
    where b.objd = o.data_object_id
      and o.owner='BISTG'
    group by o.owner, o.object_type, o.object_name, b.objd, b.status;
    The results are identical before and after. Has the buffer cache really been cleared?
    I hope you meant: alter system flush buffer_cache; -- but even if you did, the numbers wouldn't change. "Flushing" the buffer cache doesn't really empty out the memory; it simply relinks the buffer headers to a different list and sets the status to "free". Amongst other things, that means it leaves the objd in place.
    Regards
    Jonathan Lewis

  • How to generate a query involving multiple tables(one left join others)

    Hi, all,
    I want to query a db like this:
    I need all the demographics information (from table demo), their acr info (from table acr), their clinical info (from table clinical), and their lab info (from table lab).
    The db is like this:
    demo->acr: one to many
    demo->clinical info: one to many
    demo->lab info: one to many
    I want to get one query result which are demo left join acr, and demo left join clinical, and demo left join lab. I hope the result is a record including demo info, acr info, clinical info, and lab info.
    How could I do this in SQL?
    Thanks a lot!
    Qian

    Thank you very, very much!
    Actually, I need a huge query to include all the tables in our db.
    We are running a clinical db which collects the patients demographics info, clinical info, lab info, and many other information.
    The Demographics table is a center hub which connects other tables. This is the main architecture.
    My boss needed a huge query to include all the information, so others could find what they need by filtering.
    As you have found, because one patient usually has multiple clinical/lab info sets, the results will be multiplied! The number of results = n*m*k*...
    My first plan is to set time point criteria to narrow all the records with one study year. If somebody needs to compare them, then I have to show them all.
    So I have to know the SQL to generate a huge query including as many tables as possible.
    I show some details here:
    CREATE TABLE "IMMUNODATA"."DEMOGRAPHICS" (
    "SUBJECTID" INTEGER NOT NULL,
    "WORKID" INTEGER,
    "OMRFHISTORYNUMBER" INTEGER,
    "OTHERID" INTEGER,
    "BARCODE" INTEGER,
    "GENDER" VARCHAR2(1),
    "DOB" DATE,
    "RACEAI" INTEGER,
    "RACECAUCASIAN" INTEGER,
    "RACEAA" INTEGER,
    "RACEASIAN" INTEGER,
    "RACEPAC" INTEGER,
    "RACEHIS" INTEGER,
    "RACEOTHER" VARCHAR2(50),
    "SSN" VARCHAR2(11),
    PRIMARY KEY("SUBJECTID") VALIDATE
    );
    CREATE TABLE "IMMUNODATA"."ACR" (
    "ID" INTEGER NOT NULL,
    "THEDATE" DATE ,
    "SUBJECTID" INTEGER NOT NULL,
    "ACR_PAGENOTCOMPLETED" VARCHAR2(1000) ,
    "ACR_MALARRASHTODAY" INTEGER ,
    "ACR_MALARRASHEVER" INTEGER ,
    "ACR_MALARRSHEARLIESTDATE" DATE ,
    PRIMARY KEY("ID") VALIDATE,
    FOREIGN KEY("SUBJECTID") REFERENCES "IMMUNODATA"."DEMOGRAPHICS" ("SUBJECTID") VALIDATE
    );
    CREATE TABLE "IMMUNODATA"."CLIN" (
    "ID" INTEGER NOT NULL,
    "THEDATE" DATE ,
    "SUBJECTID" INTEGER NOT NULL,
    "CLIN_PAGENOTCOMPLETED" VARCHAR2(1000) ,
    "CLIN_FATIGUE" VARCHAR2(20) ,
    "CLIN_FATIGUEDATE" DATE ,
    "CLIN_FEVER" VARCHAR2(20) ,
    "CLIN_FEVERDATE" DATE ,
    "CLIN_WEIGHTLOSS" VARCHAR2(20) ,
    "CLIN_WEIGHTLOSSDATE" DATE ,
    "CLIN_CARDIOMEGALY" VARCHAR2(20) ,
    PRIMARY KEY("ID") VALIDATE,
    FOREIGN KEY("SUBJECTID") REFERENCES "IMMUNODATA"."DEMOGRAPHICS" ("SUBJECTID") VALIDATE
    );
    Other tables are alike.
    Thank very much!
    Qian

  • Could not show multiple records while could show only one record

    Hi, all
    I have an oracle 10g db running on a Linux E3 server.
    I have two tables:
    CREATE TABLE "IMMUNODATA"."DEMOGRAPHICS" (
    "SUBJECTID" INTEGER NOT NULL,
    "WORKID" INTEGER,
    "OMRFHISTORYNUMBER" INTEGER,
    "OTHERID" INTEGER,
    "BARCODE" INTEGER,
    "GENDER" VARCHAR2(1),
    "DOB" DATE,
    "RACEAI" INTEGER,
    "RACECAUCASIAN" INTEGER,
    "RACEAA" INTEGER,
    "RACEASIAN" INTEGER,
    "RACEPAC" INTEGER,
    "RACEHIS" INTEGER,
    "RACEOTHER" VARCHAR2(50),
    "SSN" VARCHAR2(11),
    PRIMARY KEY("SUBJECTID") VALIDATE
    );
    CREATE TABLE "IMMUNODATA"."MEDICATION" (
    "ID" INTEGER NOT NULL ,
    "THEDATE" DATE ,
    "SUBJECTID" INTEGER NOT NULL,
    "MED_PAGENOTCOMPLETED" VARCHAR2(500) ,
    "MEDICATION_NAME" VARCHAR2(100),
    "MEDICATION_CLASSIFICATION" VARCHAR2(100),
    "MEDICATION_DOSENUM" VARCHAR2(50),
    "MEDICATION_DOSEMEASURE" VARCHAR2(100),
    "MEDICATION_ROUTE" VARCHAR2(100),
    "MEDICATION_FREQ" VARCHAR2(100),
    "MEDICATION_BEGIN" DATE,
    "MEDICATION_END" DATE,
    "BARCODE" INTEGER,
    "DATASOURCE" VARCHAR2(50),
    "NOCHANGE" INTEGER,
    PRIMARY KEY("ID") VALIDATE,
    FOREIGN KEY("SUBJECTID") REFERENCES "IMMUNODATA"."DEMOGRAPHICS" ("SUBJECTID") VALIDATE
    );
    I want an output that combines all the medication records of one person into one row, so I created a function:
    CREATE OR REPLACE FUNCTION COMMEDICATION(p_subjectid IN immunodata.medication.subjectid%TYPE ) RETURN VARCHAR2 IS
    v_medication VARCHAR2(1000);
    BEGIN
    FOR c IN (SELECT THEDATE, MED_PAGENOTCOMPLETED, MEDICATION_NAME, MEDICATION_CLASSIFICATION, MEDICATION_DOSENUM, MEDICATION_DOSEMEASURE,MEDICATION_ROUTE,MEDICATION_FREQ,MEDICATION_BEGIN,MEDICATION_END,DATASOURCE,NOCHANGE FROM immunodata.medication WHERE subjectid = p_subjectid)
    LOOP
    IF v_medication IS NULL THEN
    v_medication := c.THEDATE||' '||c.MED_PAGENOTCOMPLETED||' '||c.MEDICATION_NAME||' '||c.MEDICATION_CLASSIFICATION||' '||c.MEDICATION_DOSENUM||' '||c.MEDICATION_DOSEMEASURE||' '||c.MEDICATION_ROUTE||' '||c.MEDICATION_FREQ||' '||c.MEDICATION_BEGIN||' '||c.MEDICATION_END||' '||c.DATASOURCE||' '||c.NOCHANGE;
    ELSE
    v_medication := v_medication||','||c.THEDATE||' '||c.MED_PAGENOTCOMPLETED||' '||c.MEDICATION_NAME||' '||c.MEDICATION_CLASSIFICATION||' '||c.MEDICATION_DOSENUM||' '||c.MEDICATION_DOSEMEASURE||' '||c.MEDICATION_ROUTE||' '||c.MEDICATION_FREQ||' '||c.MEDICATION_BEGIN||' '||c.MEDICATION_END||' '||c.DATASOURCE||' '||c.NOCHANGE;
    END IF;
    END LOOP;
    RETURN v_medication;
    END;
    and I performed this selection statement:
    SQL> select subjectid, barcode, COMmedication(subjectid) from immunodata.demographics where barcode=500135;
    SUBJECTID BARCODE
    COMMEDICATION(SUBJECTID)
    33 500135
    15-SEP-00 Cyclophosphamide Immunosuppresant .7 MG IV MONTLY FORM1 ,15-SEP-00 Hydroxychloroquine (Plaquenil) Immunosuppresant 400 MG DAILY FORM1
    It is exactly what I need, so I want to show all records in the tables.
    SQL> select subjectid, barcode, COMmedication(subjectid) from immunodata.demographics;
    ERROR:
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "SYS.COMMEDICATION", line 9
    no rows selected
    It seems that one record could be shown, but multiple could not.
    Is there anything wrong with my code or other things?
    Thanks!
    Qian

    It seems that one record could be shown, but multiple could not.
    Is there anything wrong with my code or other things?
    It means that there is at least one subjectid in your table for which the value of v_medication in the function exceeds 1000 characters.
    You may want to increase the size of v_medication to 4000, which would be the upper limit for the function's return value.
    pratz

  • Unix shell: Environment variable works for file system but not for ASM path

    We would like to switch from file system to ASM for data files of Oracle tablespaces. For the path of the data files, we have so far used environment variables, e.g.,
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    This works just fine (from shell scripts, PL/SQL packages, etc.) if ORACLE_DB_DATA denotes a file system path, such as "/home/oracle", but doesn’t work if the environment variable denotes an ASM path like "+DATA/rac/datafile". I assume that it has something to do with "+" being a special character in the shell. However, escaping it as "\+" didn’t work. I tried with both bash and ksh.
    Oracle managed files (e.g., set DB_CREATE_FILE_DEST to +DATA/rac/datafile) would be an option. However, this would require changing quite a few scripts and programs. Therefore, I am looking for a solution with the environment variable. Any suggestions?
    The example below is on a RAC Attack system (http://en.wikibooks.org/wiki/RAC_Attack_-OracleCluster_Database_at_Home). I get the same issues on Solaris/AIX/HP-UX on 11.2.0.3 also.
    Thanks,
    Martin
    ==== WORKS JUST FINE WITH ORACLE_DB_DATA DENOTING FILE SYSTEM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA=/home/oracle
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 20:57:09 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> !ls -l ${ORACLE_DB_DATA}/bma.dbf
    -rw-r----- 1 oracle asmadmin 2105344 Aug 24 20:57 /home/oracle/bma.dbf
    SQL> drop tablespace bma including contents and datafiles;
    ==== DOESN’T WORK WITH ORACLE_DB_DATA DENOTING ASM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA="+DATA/rac/datafile"
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 21:08:47 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON
    ERROR at line 1:
    ORA-01119: error in creating database file '${ORACLE_DB_DATA}/bma.dbf'
    ORA-27040: file create error, unable to create file
    Linux Error: 2: No such file or directory
    SQL> -- works if I substitute manually
    SQL> CREATE TABLESPACE BMA DATAFILE '+DATA/rac/datafile/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> drop tablespace bma including contents and datafiles;

    My revised understanding is that this is not a shell issue with expanding the "+", but Oracle behavior: Oracle apparently first checks whether the path starts with a "+". If it does not (file system case), it performs the normal environment variable resolution; if it does start with a "+" (ASM case), it skips the resolution entirely. Escaping, such as "\+" instead of "+", doesn't work either.
    To be more specific regarding my use case: I need the substitution to work from SQL*Plus scripts started with @script, PL/SQL packages with execute immediate, and optionally entered interactively in SQL*Plus.
    Thanks,
    Martin
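    A possible client-side workaround (a sketch of shell mechanics, not documented Oracle behavior): since Oracle skips its own variable resolution for ASM paths, let the shell expand the variable before the statement ever reaches SQL*Plus. With an unquoted here-document delimiter, the shell performs the ${ORACLE_DB_DATA} substitution itself, so Oracle only sees the literal path:

```shell
#!/bin/bash
# Sketch: expand ORACLE_DB_DATA in the shell, so Oracle only ever sees the
# literal path. Works the same for file-system and ASM ("+DATA/...") paths.
export ORACLE_DB_DATA="+DATA/rac/datafile"

# Unquoted EOF => the shell performs ${ORACLE_DB_DATA} expansion here.
sql=$(cat <<EOF
CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
EOF
)
echo "$sql"   # in a real script: echo "$sql" | sqlplus -s "/ as sysdba"
```

    This only covers the shell-script case. For scripts started with @script or PL/SQL using execute immediate, a similar client-side substitution would be needed, e.g. generating the script from a template or passing the path in as a SQL*Plus substitution variable.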

  • Archive logs are missing in hot backup

    Hi All,
    We are using the following commands to take a hot backup of our database. The hot backup is run by the "backup" OS user on a Linux system.
    =======================
    rman target / nocatalog <<EOF
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$backup_dir/$date/%F';
    run {
    allocate channel oem_backup_disk1 type disk format '$backup_dir/$date/%U';
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    backup as COMPRESSED BACKUPSET database include current controlfile;
    #--Switch archive logs for all threads
    sql 'alter system archive log current';
    #--Backup archive logs and delete what we've backed up
    backup as COMPRESSED BACKUPSET archivelog all not backed up delete all input;
    release channel oem_backup_disk1;
    allocate channel for maintenance type disk;
    delete noprompt obsolete device type disk;
    release channel;
    exit
    EOF
    =======================
    After each of the two "sql 'alter system archive log current';" commands, I see the following lines in the alert log (twice per run). Because of this, not all online logs are getting archived (two logs are missing per day), and the backup taken is unusable when restoring. I am worried about this. Is there any way to avoid this situation?
    =======================
    Errors in file /u01/oracle/admin/rac/udump/rac1_ora_3546.trc:
    ORA-19504: failed to create file "+DATA/rac/1_32309_632680691.dbf"
    ORA-17502: ksfdcre:4 Failed to create file +DATA/rac/1_32309_632680691.dbf
    ORA-15055: unable to connect to ASM instance
    ORA-01031: insufficient privileges
    =======================
    Regards,
    Kunal.
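    The ORA-15055 / ORA-01031 pair usually points at OS authentication: the server process created for the "backup" OS user cannot connect to the ASM instance because that user is not in the OS group ASM accepts for privileged access. A quick membership check (a sketch; the group names asmadmin/asmdba and the user name are assumptions and depend on how Grid Infrastructure was installed):

```shell
#!/bin/bash
# Sketch: check whether an OS user is in the groups ASM authentication
# typically expects. Group names and user name are assumptions.
check_asm_groups() {
  local user=$1 grp
  for grp in asmadmin asmdba; do
    if id -nG "$user" 2>/dev/null | grep -qw "$grp"; then
      echo "$user is in $grp"
    else
      echo "$user is NOT in $grp"
    fi
  done
}

check_asm_groups backup   # the OS user that runs the RMAN job
```

    If a group turns out to be missing, adding the user to it (e.g. usermod -aG asmadmin backup) and starting a fresh login session is the usual fix; alternatively, run the backup job as the Oracle software owner.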

    Thanks all for your help; please find additional information below. I got the following error because a log sequence was missing. Every day during the hot backup there are two missing archive logs, which makes our backup inconsistent and useless.
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32531_632680691.dbf thread=1 sequence=32531
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28768_632680691.dbf thread=2 sequence=28768
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28769_632680691.dbf thread=2 sequence=28769
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28770_632680691.dbf thread=2 sequence=28770
    archive log filename=/mnt/xtra-backup/ora_archivelogs/1_32532_632680691.dbf thread=1 sequence=32532
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28771_632680691.dbf thread=2 sequence=28771
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28772
    archive log filename=/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf thread=2 sequence=28773
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 12/13/2012 04:22:56
    RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    ORA-00310: archived log contains sequence 28772; sequence 28773 required
    ORA-00334: archived log: '/mnt/xtra-backup/ora_archivelogs/2_28772_632680691.dbf'
    Let me try the suggestions provided above.
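    Since the symptom is a recurring sequence gap, a quick sanity check of the archive log directory can confirm which sequences are actually missing before a restore is attempted. A sketch, assuming the <thread>_<sequence>_<resetlogs_id>.dbf naming visible in the listing above:

```shell
#!/bin/bash
# Sketch: report gaps in archived-log sequence numbers per RAC thread,
# assuming filenames of the form <thread>_<sequence>_<resetlogs_id>.dbf.
find_gaps() {
  local logdir=$1 thread s prev
  for thread in 1 2; do          # two threads, as in the listing above
    prev=""
    for s in $(ls "$logdir/${thread}"_*_*.dbf 2>/dev/null \
               | sed -E 's/.*_([0-9]+)_[0-9]+\.dbf$/\1/' | sort -n); do
      if [ -n "$prev" ] && [ "$s" -ne $((prev + 1)) ]; then
        echo "thread $thread: missing sequences between $prev and $s"
      fi
      prev=$s
    done
  done
}

find_gaps /mnt/xtra-backup/ora_archivelogs   # directory from the post
```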
