Data Log File Refnum Type Def Bug??

Hello,
I just found some quirky behaviour (LV 7.1.1):
1. In the attached LLB, open "RefnumVI.vi"
2. Select the Data Log File Refnum control and open it for editing (Edit - Customize Control ... from the menu)
3. Close "RefnumVI.vi" but leave "Refnum.ctl" open
4. Select the enum inside the refnum container, and open it
5. Select File - Save As ... and save the enum as "RefnumEnum2.ctl"
6. Close the enum
7. Save "Refnum.ctl", and close it
8. Reopen "RefnumVI.vi" and display its hierarchy (Browse - Show VI Hierarchy from the menu)
Notice that "RefnumVI.vi" still has a link to "RefnumEnum.ctl", even though we saved this as "RefnumEnum2.ctl" earlier.
If you go back to the VI, right click on the refnum, and replace it with itself (i.e. select "Refnum.ctl"), the link disappears.
This behaviour does not happen if I use a Cluster instead of a Data Log File Refnum. I imagine the difference is that the calling VI needs to know about the structure of the data log file in ways it doesn't need to know about the structure of a cluster, but this is still very counter-intuitive behaviour. Is this really expected? Or is it a bug? Is there any other way to remove the link?
Cheers,
Jaegen
Attachments:
RefnumEnumBug.llb 22 KB

Nathan,
Thanks for your response - I have 8.2 and am in the process of evaluating how/when to upgrade.
Does this mean that the compiler/linker is behaving differently depending on where you open a type def from?  The reason I'm asking is that I've seen similar behavior when editing a hierarchy of type defs; depending on how I open the low-level type def I'm actually editing, changes will or won't get propagated to other instances properly.
Regarding this actual problem, the issue I had is that the data log file refnum type def exists on many VIs, and thus the incorrect link now exists on many VIs, and I don't see any way of correcting the problem without manually replacing the type def with itself in every location (given there's no "Replace All" feature in LV 7.1.1). However, the hierarchy I'm dealing with was only created for testing, so I don't actually need to fix it. I'll just know to avoid causing this problem in the first place in the future.
Jaegen

Similar Messages

  • Data log file refnum - what is it?

    Hello all,
    I want to know what a data log file refnum is and how to use it.
    Thanks.

    A datalog file is a file that stores data as a sequence of records of a single, arbitrary data type that you specify when you create the file. Although all the records in a datalog file must be a single type, that type can be complex. For instance, you can specify that each record is a cluster that contains a string, a number, and an array.
    You could use a datalog file refnum if, for instance, you were creating a subVI which will be accepting a datalog file as an input. You could use this refnum to write to the file and perform other actions on it.
    J.R. Allen

  • Get Total DB size , Total DB free space , Total Data & Log File Sizes and Total Data & Log File free Sizes from a list of server

    How do I get the SQL Server total DB size, total DB free space, and total data & log file sizes and free sizes from a list of servers?

    Hi Shivanq,
    To get a list of databases, their sizes, and the space available in each on the local SQL instance, run:
    dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto
    This article is also helpful for you to get DB and Log File size information:
    Checking Database Space With PowerShell
    I hope this helps.
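    If you need the same per-file information in T-SQL (for example when looping over a list of servers from a central script), a minimal sketch along these lines may help; it only covers file sizes, since free space must be queried inside each database:
    -- Data and log file sizes for every database on one instance.
    -- size is in 8 KB pages, so size * 8 / 1024.0 converts to MB.
    SELECT DB_NAME(database_id) AS database_name,
           type_desc            AS file_type,     -- ROWS = data, LOG = log
           name                 AS logical_name,
           size * 8 / 1024.0    AS size_mb
    FROM   sys.master_files
    ORDER  BY database_name, file_type;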

  • Odi 11g - IKM SQL to Hyperion Essbase (DATA) log file always empty

    In ODI 11g, when using "IKM SQL to Hyperion Essbase (DATA)" with "LOG_ENABLED" = true,
    only an empty file is generated.
    Just the "LOG_ERRORS" file (if errors occur) is created.
    Is this just my issue?
    Can someone help me?
    P.S.: I get the same issue even with "IKM SQL to Hyperion Planning".
    Thx in advance, Paolo

    Thanks John for your suggestion.
    Here is the patch: "Patch 10302682: IKM SQL TO PLANNING: LOG FILE IS CREATED BUT NOTHING INSIDE."
    I didn't see any other patch about Essbase...
    I've been checking the support site all day.
    Paolo
    Edited by: Paolo on 19-apr-2011 8.44

  • DATA Log file path change in sql

    Dear experts,
    Due to the non-availability of disk space I moved my SAP log file to another drive. SAP is now working fine, but when I try to change the path in SQL 2005 it does not give me permission to do so.
    Please let me know how to change the log file path in SQL 2005.
    Also, if in future I change a SAP data file path, how do I configure the new path for the database in SQL 2005?
    Regards,
    jitendra.

    Hi
    Kindly go through the web links below. Before starting the process, stop the SAP instance.
    http://support.microsoft.com/kb/224071
    Set the log DB size - http://help.sap.com/saphelp_nwce72/helpdata/en/f2/31ad9b810c11d288ec0000e8200722/content.htm
    and ref the SAP Note 363018
    Regards
    Sriram
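    As a rough T-SQL sketch of the move itself (the database name SID and the file names/paths below are placeholders, and the SAP instance must be stopped first as noted above):
    -- 1. Take the database offline
    ALTER DATABASE [SID] SET OFFLINE;
    -- 2. Point SQL Server at the new log file location
    ALTER DATABASE [SID] MODIFY FILE (NAME = SIDLOG1, FILENAME = 'E:\SAPLOG\SIDLOG1.LDF');
    -- 3. Move the physical .LDF file to E:\SAPLOG\, then bring the database back online
    ALTER DATABASE [SID] SET ONLINE;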

  • Need example with write to DATA log file programmatically?

    Hi all,
    I would like to request an example of writing a datalog (*.DAT) file which could do this.
    Datalog file.
    NAME: AGE: SEX: (header)
    john 16 M (data)
    marie 55 F
    Dolah 34 M
    The program should also let me add new data and then store it in the same file.dat. The new data should (hopefully) be appended after the last saved entry (e.g. after the row for Dolah).
    I need to do this so that I can open and read back the datalog file and use the data for other purpose.
    Please help,
    juni

    You've got some examples in Fundamentals > File Input and Output, but one of the easiest things to do is to modify Write Strings to Spreadsheet File.vi to write strings as detailed at the bottom of the diagram, and then save it to a different place with a different name.
    Attachments:
    Spreadsheet_File_Write.jpg 17 KB

  • Setting control to default value when it is a strict type def

    If my subVI has a control that happens to be a strict type defined cluster and I have saved the values in the control as the default values, how can I change them back to the default values of the type def (which are empty)? Because it is a strict type definition, I cannot click on the elements inside the cluster and modify them (e.g. click on an array inside the cluster and do an empty array operation).
    In the attached screenshot, the control "BLE Meters To Test" is just an array of clusters and I was able to empty the array.  The control "Capture Data" is a strict type def.  I would like to clear the contents of this control without having to delete the control and add it back with a new instance of the same strict type def.
    Attachments:
    screenshot.PNG 33 KB

    Are you talking about edit mode or run mode?
    You could temporarily add a second instance of the typedef (all empty), then change it to a constant. Whenever you need to reset at run time, write the new diagram constant to a local variable of the control.
    glstill wrote:
    Because it is a strict type definition, i cannot click on the elements inside the cluster and modify them (e.g. click on an array inside the cluster and do an empty array operation.)
    Make sure to click on the array container, not on the array elements.
    LabVIEW Champion. Do more with less code and in less time.

  • Problem in creating log file on Ubuntu(Linux)

    hi
    I have developed a project in Java which creates a log file for exception handling. This works fine on Windows, but when I run the jar file on Ubuntu it does not create the log file and throws an error.
    Error Message="Permission Denied UnixFileSystem.java createFileExclusively()"
    The code is:
    strStartupPath = new java.io.File("").getAbsolutePath();
    DateFormat dateformat = new SimpleDateFormat("yyyyMMdd");
    Date date = new Date();
    // Note: "\\" is a Windows-only separator; File.separator works on every platform
    String strFileName = strStartupPath + File.separator + dateformat.format(date) + ".log";
    File logFile = new File(strFileName);
    Your assistance will be appreciated.
    Sonal

    If you want to do this properly then you will have to make it OS specific. Various security systems on each platform will prevent you from freely writing your logfile unless you choose the correct location. The Applications folder under Mac OS X, for instance, requires admin privileges. Log files are typically stored in the Application Data folder under Windows and in /Library/Logs or ~/Library/Logs under Mac OS X. On Unix and Linux you will typically use the /var/log directory.
    I think your question is rather Java than Linux specific, but there is a lot of information available for free on the web.
    Perhaps the following example can help you to determine the OS:
    public static final class OsUtils {
       private static String OS = null;
       public static String getOsName() {
          if (OS == null) { OS = System.getProperty("os.name"); }
          return OS;
       }
       public static boolean isWindows() {
          return getOsName().startsWith("Windows");
       }
       public static boolean isUnix() { return !isWindows(); } // and so on
    }
    Edited by: Dude on Apr 16, 2011 12:58 AM

  • Shrink Transaction log file - - - SAP BPC NW

    Hi friends,
    We want to shrink the transaction log files in SAP BPC NW 7.0. How can we achieve this?
    Please can you throw some light on this.
    Why did we think of shrinking the file?
    We are getting an "out of memory" error whenever we do any activity, so we thought of shrinking the file (this is not a production server - FYI).
    An example of an activity where the out-of-memory issue appears:
    SAP BPC Excel >>> eTools >>> Client Options >>> Refresh Dimension Members >>> this leads to a pop-up screen stating "out of memory".
    So we thought of shrinking the file.
    Please any suggestions
    Thank you and Kindest Regards
    Srikaanth

    Hi Poonam,
    Excel is not the only thing throwing this kind of message (out of memory) - the SAP note is helpful if we have the error in Excel alone,
    but we are facing this error everywhere.
    We have also found out that our hard disk has run out of space;
    it now has only a few megabytes free.
    We want to empty the log files and make some space for us,
    and clear out all our test data, log files, and other stuff.
    Please can you recommend us some way.
    Thank you and Kindest regards
    Srikaanth
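    Since this is explicitly not a production server, a minimal T-SQL sketch of the usual shrink procedure may help; the database name AppSet_DB and the logical log file name are placeholders - check them with sp_helpfile first:
    -- Find the logical name of the transaction log file
    USE [AppSet_DB];
    EXEC sp_helpfile;
    -- Truncate the log (SQL 2005 syntax; on 2008+ switch the database to SIMPLE recovery instead),
    -- then shrink the file to a 1 GB target
    BACKUP LOG [AppSet_DB] WITH TRUNCATE_ONLY;
    DBCC SHRINKFILE (AppSet_DB_log, 1024);   -- target size in MB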

  • No Data Change event generated for a XControl in a Type Def.

    Hello,
    Maybe I am missing something, but the Data Change event of an XControl is not called when the XControl is used in a Type Def. or Strictly Type Def. (see the attached file).
    There may be some logic behind it, but it eludes me. Any idea why, or any idea of a workaround that still keeps the Type Def.?
    Best Regards.
    Julian
    Windows XP SP2 & Windows Vista 64 - LV 8.5 & LV 8.5.1
    Attachments:
    XControl No Data Change event.zip 45 KB

    Hi TWGomez,
    Thank you for addressing this issue. It must be an XControl because it carries many implemented functions/methods (though there are none in the provided example).
    Also consider that the provided non-working example is in fact a reduction of my actual problem (in a 1000-VI large application) to the smallest relevant elements. In fact I use a typedef of a mix of a lot of different XControls and normal controls, some of them being typedefs, strict typedefs or normal controls, and other XControls of XControls (of XControls...) in an object-oriented-like approach...
    Hi Prashant,
    I use a typedef to propagate its modifications automatically everywhere it is used in the application (a lot of places). As you imply, an XControl would do the same, though one would like to construct a simple typedef when no additional functionality is wanted.
    The remark "XControl = typedef + block diagram" is actually very neat (I never thought of it that way) because it provides a workaround by transforming every typedef into an XControl. Thanks very much, I will explore this solution.
    One issue remains: all normal controls update their displayed value when inserted into a typedef, but XControls do not - the Data Change event is not triggered. I know that XControls have some limitations (no array of XControls), but I would say, like TWGomez, that this is a "bug" considering the expected functionality.

  • Log file sync vs log file parallel write probably not bug 2669566

    This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
    Version : 9.2.0.8
    Platform : Solaris
    Application : Oracle Apps
    The number of commits per second ranges between 10 and 30.
    When querying statspack performance data the calculated average wait time on the event ‘log file sync’ is on average 10 times the wait time for the ‘log file parallel write’ event.
    Below are just 2 samples where the ratio is even about 20.
    snap_time              log file parallel write avg    log file sync avg    ratio
    11/05/2008 10:38:26    8,142                          156,343              19.20
    11/05/2008 10:08:23    8,434                          201,915              23.94
    So the wait time for a ‘log file sync’ is 10 times the wait time for a ‘log file parallel write’.
    First I thought that I was hitting bug 2669566.
    But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
    And I think that it proves that I am NOT hitting this bug.
    Below is a sample of the output for the log writer.
    -- End of snap 3
    HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
    DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
    DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
    DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
    DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
    DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
    DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
    When adding up the DELTA/SEC values (which are in microseconds) for the wait events, they always roughly sum to a million microseconds.
    In the example above, 781036 + 210432 = 991468 microseconds.
    This is the case for all the snaps taken by snapper.
    So I think that the wait time for the ‘log file parallel write time’ must be more or less correct.
    So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
    Any clues?

    Yes, that is true!
    But that is the way I calculate the average wait time = total wait time / total waits.
    So the average wait time per wait for the 'log file sync' event should be near the wait time for the 'log file parallel write' event.
    I use the query below:
    select snap_id
    , snap_time
    , event
    , time_waited_micro
    , (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
    , total_waits
    , (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
    , trunc(((time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24))/((total_waits - p_total_waits)/((snap_time - p_snap_time) * 24))) average
    from (
    select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
    lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
    lag(sn.snap_time) over (partition by se.event order by sn.snap_time) p_snap_time,
    lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
    lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
    row_number() over (partition by event order by sn.snap_id) r
    from perfstat.stats$system_event se, perfstat.stats$snapshot sn
    where se.SNAP_ID = sn.SNAP_ID
    and se.EVENT = 'log file sync'
    order by snap_id, event
    )
    where time_waited_micro - p_time_waited_micro > 0
    order by snap_id desc;

  • DATE fields and LOG files  in context with external tables

    I am facing two problems when dealing with the external tables feature in Oracle 9i.
    I created an external table with some fields of the DATE data type. There were no issues during the creation part, but when I query the table the DATE fields are not selected properly, even though the data is there in the files. Are there any ideas on how to deal with this?
    My next question is regarding the log files. The contents of the log file keep growing when querying the external tables. Is there a way to control this behaviour?
    Suggestions / Advices on the above two issues are welcome.
    Thanks
    Lakshminarayanan

    Hi
    If you have date datatypes then:
    select
    greatest(TABCASER1.CASERRECIEVEDDATE, EVCASERS.FINALEVDATES, EVCASERS.PUBLICATIONDATE, EVCASERS.PUBLICATIONDATE, TABCASER.COMPAREACCEPDATE)
    from TABCASER, TABCASER1, EVCASERS
    where ... -- join and other conditions
    1. greatest is good enough
    2. to_date creates a date datatype from a string with the format of the format string ('mm/dd/yyyy')
    3. decode(a, b, c, d) is a function: if a = b then return c, else d. NULL means that there is no data in the cell of the table.
    6. to format the date for display, use the to_char function with a format model as in the to_date function.
    Ott Karesz
    http://www.trendo-kft.hu
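    Since the reply above does not really address the external table itself, here is a hedged sketch of how both issues are often handled in the table definition; the directory object, file name, column names, and date mask are all assumptions:
    CREATE TABLE emp_ext (
      empno    NUMBER,
      hiredate DATE
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir        -- assumed directory object
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        NOLOGFILE                      -- stops the log file from growing on every query
        FIELDS TERMINATED BY ','
        (
          empno    CHAR(10),
          hiredate CHAR(20) date_format DATE mask "mm/dd/yyyy"  -- mask must match the file
        )
      )
      LOCATION ('emp.dat')
    );
    If your release rejects the date_format clause, select the field as CHAR instead and convert it with to_date in a view over the external table.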

  • Occasional invalid modified date on a file - looks like a bug

    In the Bridge script I'm working on, I need to compare the last modified dates of files. I've found that when I iterate through the app.document.visibleThumbnails array, in nearly every directory I test on there are one or two files that return an "undefined" value for app.document.visibleThumbnails[i].spec.modified. It is not the same two files each run, but there's always at least one with a missing value. It's a pretty simple test app that illustrates the problem:
    var thumbs = app.document.visibleThumbnails;
    for (var j in thumbs) {
        var modDateSpec = thumbs[j].spec.modified;
        if (modDateSpec == undefined) {
            Window.alert("modification date is undefined");
        }
    }
    Unless this value is not intended to be reliable, this appears to be a bug, which is why I am reporting it. I have worked around the problem for now by getting the fsName from the same thumbnail, creating a new File object with that fsName, and getting the modified date from that temp object.
    I've verified that purging and rebuilding the cache does not make the problem go away. I see this problem on many different directories.
    Is this the right place to report this type of bug or is there a different procedure I should follow?
    --John


  • Data Services 4.0 Designer. Job Execution but empty log file no matter what

    Hi all,
    I am running DS 4.0. When I execute my batch job via Designer, the log window pops up but is blank, i.e. I cannot see any trace messages.
    It doesn't matter if I select "Print all trace messages" in the execution properties.
    The Job Server is running on a separate server; the only thing I have locally is my Designer.
    If I log into the Data Services Management Console and select the job server, I can see trace and error logs from the job. So I guess what I need is for this stuff to show up in my Designer?
    Did I miss a step somewhere?
    I can't find anything in the docs about this.
    thanks
    Edited by: Andrew Wangsanata on May 10, 2011 11:35 AM
    Added additional detail

    Awesome, thanks Manoj.
    I found the log file. The relevant lines in it for the last job I ran are:
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  Starting job with command line -PLocaleUTF8 -Utip_coo_ds_admin
                                                    -P+04000000001A030100100000328DE1B2EE700DEF1C33B1277BEAF1FCECF6A9E9B1DA41488E99DA88A384001AA3A9A82E94D2D9BCD2E48FE2068E59414B12E
                                                    48A70A91BCB  -ek********  -G"70dd304a_4918_4d50_bf06_f372fdbd9bb3" -r1000 -T1073745950  -ncollect_cache_stats
                                                    -nCollectCacheSize  -ClusterLevelJOB  -Cmxxx -CaDesigner -Cjxxx -Cp3500 -CtBatch  -LocaleGV
                                                    -BOESxxx.xxx.xxx.xxx -BOEAsecLDAP -BOEUi804716
                                                    -BOEP+04000000001A0301001000003F488EB2F5A1CAB2F098F72D7ED1B05E6B7C81A482A469790953383DD1CDA2C151790E451EF8DBC5241633C1CE01864D93
                                                    72DDA4D16B46E4C6AD -Sxxx.xxx.xxx -NMicrosoft_SQL_Server -Qlocal_repo  coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e" -l"C:\Program Files (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/trace_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -z"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/error_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -w"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/monitor_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -Dt05_11_2011_16_52_27_9
                                                    (BODI-850052)
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  StartJob : Job '05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3' with pid '148' is kicked off
                                                    (BODI-850048)
    (14.0) 05-11-11 16:52:28 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <4> (BODI-850170)
    (14.0) 05-11-11 16:52:28 (2272:2472) JobServer:  AddChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 17:02:32 (2272:2472) JobServer:  RemoveChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  GetRunningJobs() success. (BODI-850058)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  PutLastJobs Success.  (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <5> (BODI-850170)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    It does not look like I have any errors with respect to connectivity (or any errors at all...).
    Please advise on what, if anything, you notice from the log file and/or next steps I can take.
    Thanks.

  • I have one problem with Data Guard. My archive log files are not applied.

    I have one problem with Data Guard: my archive log files are not applied, even though all archive log files have been received on my physical standby DB.
    I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
    In Enterprise Manager on the primary database it looks OK - I get the message "Data Guard status Normal".
    But, as I wrote above, the archive log files are not applied.
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    10) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    11)on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    12) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    13) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are applied now. I think in my case it had to do with the tnsnames.ora. At the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
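    For reference, a tnsnames.ora entry of the kind described (using the original poster's standby name as an example; host and port are placeholders) would look like:
    LUDA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
        (CONNECT_DATA = (SID = luda))
      )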
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration DGMGRL gives no feedback and it looks like it is hanging. The log, however, says that it succeeded.
    In another session 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
