Problem with date sorting

Hi,
I have an array of Date objects and I need to sort it. Is that possible? How can I sort dates?
It's urgent.
Kamlesh Solanki

Since Date implements compareTo correctly, you can just pass your array to Arrays.sort and you're done.
see:
http://java.sun.com/j2se/1.4.2/docs/api/java/util/Date.html
http://java.sun.com/j2se/1.4.2/docs/api/java/util/Arrays.html
Gil
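
For reference, a minimal sketch of that approach (the sample dates below are made up):

import java.util.Arrays;
import java.util.Date;

public class SortDates {
    public static void main(String[] args) {
        // java.util.Date implements Comparable, so Arrays.sort orders the array chronologically.
        Date[] dates = new Date[3];
        dates[0] = new Date();                                        // now
        dates[1] = new Date(0L);                                      // the epoch, 1 Jan 1970
        dates[2] = new Date(System.currentTimeMillis() - 86400000L);  // roughly one day ago
        Arrays.sort(dates);                                           // ascending: earliest first
        for (int i = 0; i < dates.length; i++) {
            System.out.println(dates[i]);
        }
    }
}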

Similar Messages

  • Help needed in Sorting Columns (date sorts alphabetically)

    Hi All,
    I have a report query as follows:
    select to_date(period,'MON-YYYY') start_date
    ,exp_type
    ,id
    ,acct_ref
    ,dept_ref
    ,item_date
    ,amt
    from test;
    I need to display the report in the same column order as the select statement (start_date, exp_type, id, etc.). I have set a default sort order in the report attributes and enabled user sorting on all columns. The report attributes look like this:
    column_name / sort / sort_seq
    start_date / yes / 3
    exp_type / yes / 1
    id / yes / -
    acct_ref / yes / -
    dept_ref / yes / 2
    item_date / yes / 4
    Now my problem is that start_date sorts alphabetically (as a varchar) instead of chronologically. How can I resolve this?
    If I change my default sort order so that start_date is sorted first, it works perfectly. Is there an APEX restriction that says the displayed columns need to be sorted in the same order as they appear? How do I overcome it, or is it possible to overcome at all?
    Any suggestions would be appreciated.
    Thanks,
    Sumana

    Hi,
    Thanks, but no, I cannot change the column to a date type, as many applications use it for displaying the period. I did find an equivalent date column for that period and tried using it, but that did not work either.
    On further analysis I found that even to_char(period) works fine; the problem was something else. In my region query the select statement was:
    select exp_type
    ,id
    ,amt
    ,to_date(period,'MON-YYYY')
    from test.
    In the region attributes I had used the arrows to change the display order so that period appears first. That was the culprit. When I change my region query to match the order in the region attributes, the date column as well as to_date(period) both work fine.
    I never knew that the order of the select statement and the order in the region attributes had to match. Hence, when posting, I did not pay attention to the order of my select list; sorry for that.
    The remaining mystery is why it worked when I changed the sort order so that period was sorted first and exp_type second.
    Thanks everyone for you help.
    Thanks,
    Sumana

  • Problems transferring data from MacBook Pro to MacBook Pro

    I just bought a new MacBook Pro and I am having problems transferring data from my old MacBook Pro. I am using an Apple Thunderbolt to FireWire adapter to connect the two computers, and I am using Migration Assistant. Everything looked OK until the "Transferring Your Information" screen got stuck. It says that it is "Transferring your Application folder..." and it has been stuck for at least the last two hours with a message that says "About 3 minutes remaining..." This is starting to get ridiculously annoying. Options?

    I guess I am finding my own way as time progresses and frustration increases but these are two things that I have found so far:
    1. Transferring from one machine to another does not seem to be the most efficient way to do this, so using the Time Machine backup must be the way to go...
    HOWEVER
    2. In this process I also found another "complication" called Legacy FileVault. My computer is encrypted, and I used this now so-called "Legacy FileVault" feature to do it (back when it was not legacy!). The thing is that you cannot transfer documents from your Time Machine backup to your new Mac if Legacy FileVault is active on your old machine, so you first need to deactivate the encryption. HOWEVER, this brought another issue: when I tried to deactivate the encryption using the Legacy FileVault screen, I got a message saying that I did not have enough disk space to do that. Isn't that great? This is a problem in itself, and it seems to have some sort of solution I was reading about (not straightforward at all). What I am trying to do now is to copy the folder that contains my files and just paste it onto my new machine to see if that works. It looks like my apps are on my new machine; I am unsure if they are working and need to test that next.
    I guess the machine-to-machine transfer could have sorted out the conflict eventually, after tens of hours, or maybe not! Who knows...
    A very frustrated Apple fan...
    I love Apple products, but this is an ISSUE. I feel there is a lack of adequate documentation to deal with this, or the data transfer apps should handle it in a more efficient way. Just saying...

  • Is Mavericks good to go, or are there lots of teething problems to be sorted yet?

    Is Mavericks good to go, or are there lots of teething problems to be sorted yet?

    Donald Morgan wrote:
    There are normally very small bugs, but at this point they will not affect the operation of Mavericks. I might add it is the most stable and smoothest operating system since OS 9.2.
    Good to know! OS 9.1 is my favorite OS to date. Tiger & SL running a close 2nd.
    Me personally, I will not upgrade until OS 10.9.2 is released.

  • Date sorting issue

    Check out the date sorting problems here:
    http://ray.camdenfamily.com/spry/blog2.cfm
    It seems almost random.

    Yup, the JS Date object is reporting that the date format is invalid in both IE and Firefox. We may have to add code to manually parse date formats; I've added that to the list of things we have to do.
    Ray, what are you using to generate your date format?
    The good news is that, the way your dates are formatted, they will sort properly using straight text sorting (see the small sketch after this reply). If you leave off your setColumnType call for the date column, it sorts properly.
    --== Kin ==--
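
    To illustrate Kin's point outside of Spry: when dates are written with the most significant field first (for example yyyy-mm-dd), plain text sorting and chronological sorting agree. A minimal Java sketch with made-up values:

    import java.util.Arrays;

    public class TextDateSort {
        public static void main(String[] args) {
            // yyyy-mm-dd strings: lexicographic order matches chronological order
            String[] dates = { "2007-03-15", "2006-12-01", "2007-01-09" };
            Arrays.sort(dates);                   // plain String comparison
            for (int i = 0; i < dates.length; i++) {
                System.out.println(dates[i]);     // 2006-12-01, 2007-01-09, 2007-03-15
            }
        }
    }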

  • Problem while processing transaction data from DSO to cube

    Hi Gurus,
    We are facing a problem while processing transaction data from DSO to cube: the data packets are processing and updating very slowly. Please help me with this.
    Thanks and regards,
    Sridhar

    Hi,
    I suggest you check a few places where you can see the status:
    1) SM37 job log (give the BI request name); it should give you details about the request. If it's active, make sure the job log is getting updated at frequent intervals.
    2) SM66: get the job details (server name, PID, etc. from SM37) and see in SM66 whether the job is running. Check whether it is accessing/updating some tables or not doing anything at all.
    If it's running and you can see it active in SM66, you can wait for some time to let it finish.
    3) RSMO: see what is available in the details tab. It may be in the update rules.
    4) ST22: check if any short dump has occurred.
    You can also try SM50 / SM51 to see what is happening at the system level, like reading/inserting into tables, etc.
    If you feel it's active and running, you can verify by checking whether the number of records in the cube has increased.
    Thanks,
    JituK

  • Having a problem with dates when I send my Numbers doc to Excel: the dates are all out, and the recipients have to cut and paste individual entries onto their spreadsheet. Any idea how I can prevent this?

    I am having a problem with dates when I send my Numbers doc to Excel. The dates are all out, and the recipients have to cut and paste individual entries onto their spreadsheet. Any idea how I can prevent this?
    I'm using Lion on an MBP and Numbers is the latest version

    Could you give more details about what is wrong with your dates?
    M…oSoft products aren't allowed on my machines, but I use LibreOffice, which is a clone of Office.
    When I export from Numbers to Excel and open the result with LibreOffice, the dates are treated correctly.
    To be precise, dates after 01/01/1904 are treated correctly. Dates before 01/01/1904 are exported as strings, but since that is flagged during the export process, it's not surprising.
    Yvan KOENIG (VALLAURIS, France) mardi 3 janvier 2012
    iMac 21”5, i7, 2.8 GHz, 12 Gbytes, 1 Tbytes, mac OS X 10.6.8 and 10.7.2
    My iDisk is : http://public.me.com/koenigyvan
    Please : Search for questions similar to your own before submitting them to the community
    For iWork's applications dedicated to iOS, go to :
    https://discussions.apple.com/community/app_store/iwork_for_ios

  • Problem with date format when a prompt appears in Web Intelligence

    BO XI R2 with SP5, installed on Windows 2003 with Russian support.
    Inside BO, all labels and buttons are in Russian. But when I invoke a web report and the prompt appears, there is a problem with the date format.
    It looks like a Korean date format, 'jj.nn.aaa H:mm:ss'. I have checked the system date settings in Windows and everything looks right.
    What do I have to do?
    Where can I change the date format for BO?


  • Problem with Date format

    Got one more problem, Marilyn and Radhakrishnan...
    Regarding the solution you provided me earlier in the thread "Problem with date format":
    What is happening is that I am able to change the 2400 to 0000, but when it is changed from 2400 on Jan 1st to 0000, the hour changes but not the date; the date still remains Jan 1st instead of Jan 2nd.
    E.g.: Jan 1st 2400 is changed to Jan 1st 0000
    instead of Jan 2nd 0000.
    Could you please help me with this issue?
    Thanks,
    GK

    GK, try this...
    decode(instr(packagename.functionname(param1, param2), '2400'), 0,
           to_date(to_char(to_date(rtrim(packagename.functionname(param1, param2), '(PT)'), 'Month dd, yyyy "at" hh24mi'), 'mm/dd/yyyy hh24mi'), 'mm/dd/yyyy hh24mi'),
           to_date(to_char(to_date(rtrim(packagename.functionname(param1, param2), '(PT)'), 'Month dd, yyyy "at" "2400"') + 1, 'mm/dd/yyyy "0000"'), 'mm/dd/yyyy "0000"'))
    -Marilyn

  • Problem with date calculation

    I am a rookie in SAP and I have a small problem with dates. Is there a function module to find out the first day of the month, given the current system date? Please help me out.

    Hi,
    As Ganesan said, you can do that.
    Here is the sample code.
    data v type sy-datum.            " will hold the first day of the current month
    data d type DTRESR-WEEKDAY.      " weekday name of that date
    v+6(2) = '01'.                   " day   = 01
    v+4(2) = sy-datum+4(2).          " month = current month
    v+0(4) = sy-datum+0(4).          " year  = current year
    CALL FUNCTION 'DATE_TO_DAY'
      EXPORTING
        date          = v
      IMPORTING
        WEEKDAY       = d.
    write d.

  • Problem with date format from Oracle DB

    Hi,
    I am facing a problem with date fields from Oracle DB sources. The format of the field in the DB table is: base type DATE, DDIC type DATS.
    I mapped the date fields to date characteristics in BI. Now the data that comes into the PSA is in a weird format; it shows up like -0.PR.09-A.
    I have tried changing the field settings in the DataSource to internal and external, and I have also tried mapping these date fields to text fields, without luck. All deliver the same format.
    I have also tried using conversion routines like CONVERSION_EXIT_IDATE_INPUT to change the format. It also delivers the same old result.
    If any of you have suggestions, or have experienced such problems, please share your experience with me.
    Thanks in advance.
    Regards
    Varada

    Thanks for all your replies. The only solutions I can see involve creating a view in the database, but I want a solution on the BI side. I would appreciate it if any of you have ideas on that.
    The issue again, in detail:
    I am facing an issue with date fields from Oracle data. The data sent from Oracle is in the format -0.AR.04-M. I am able to convert this date in BI with a conversion routine into the format 04-MAR-0.
    The problem is that I am getting data of length 10 (output format) in the format -0.AR.04-M, where the month is not numeric. Since it is text, it takes one more character of space.
    I have tried converting in different ways and increased the length in BI; the result is the same. I am wondering if we can change the date format in the database.
    I am puzzled by this date format. I have checked date fields from other Oracle DB connections in BI; they receive data in the format 20.081.031, which can be converted in BI. Only the system I am working with is creating a problem.
    Regards
    Varada

  • Problem with date format dd/mm/yyyy: I need to convert it to yyyy-mm-dd

    Dear friends,
    I have a problem with date formats. I receive the date in the format dd/mm/yyyy, but I can upload to MySQL only in the format yyyy-mm-dd.
    How should I handle this situation? For this I have written the code lines below, but I have some problems with them. Please help me solve this problem.
    String pattern = "yyyy-mm-dd";
    SimpleDateFormat format = new SimpleDateFormat(pattern);
    try {
        Date date = format.parse("2006-02-12");
        System.out.println(date);
    } catch (ParseException e) {
        e.printStackTrace();
        System.out.println(format.format(new Date()));
    }
    This output gives me Tue Apr 03 00:00:00 IST 2007,
    but I need the date in the format yyyy-mm-dd.
    regards,
    maza
    thanks in advance.

    Thanks Dear BalusC,
    I tried with this,
    rs.getString("DATA_SCAD1") // the source is an .xls file
    String pattern = "yyyy-MM-dd";
    SimpleDateFormat format = new SimpleDateFormat(pattern);
    try {
        Date date = format.parse("DATA_SCAD1");
        System.out.println(date);
    } catch (ParseException e) {
        e.printStackTrace();
        System.out.println(format.format(new Date()));
    }
    This output gives me Tue Apr 03 00:00:00 IST 2007,
    but I want to display the date in the format yyyy-mm-dd.
    regards,
    maza
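
    For reference, a minimal sketch of the conversion being discussed, assuming the value (for example whatever rs.getString("DATA_SCAD1") returns) arrives as dd/MM/yyyy text. Note that in SimpleDateFormat patterns MM means months while mm means minutes, and that the value itself, not the literal string "DATA_SCAD1", has to be passed to parse:

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class DateConversionSketch {
        public static void main(String[] args) throws ParseException {
            String input = "12/02/2006";                               // value as received (dd/MM/yyyy); made-up sample
            SimpleDateFormat inFormat = new SimpleDateFormat("dd/MM/yyyy");
            SimpleDateFormat outFormat = new SimpleDateFormat("yyyy-MM-dd");
            Date date = inFormat.parse(input);                         // parse with the input pattern
            String forMySql = outFormat.format(date);                  // re-format with the output pattern
            System.out.println(forMySql);                              // prints 2006-02-12
        }
    }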

  • "evdre encountered a problem retrieving data from the webserver"

    Hi
    We are using a big report which takes a very long time to expand, and sometimes we get the error message "evdre encountered a problem retrieving data from the webserver" when expanding it, though not very often for most of our users. But we have one user who gets this error message almost every second time he expands the report. This user has a computer with the same capacity as the rest of us; can there be some setting on the computer or in the client installation that causes this problem?
    We are using BPC 5.1 SP 5
    /Fredrik

    Hi,
    This error usually occurs if we have a huge amount of data across the combination of our dimensions.
    Also, if the selection of your member set does not match the combination of data present in the back end, you may encounter a data retrieval error.
    regards
    sashank

  • Tp ended with error code 0247 - addtobuffer has problems with data- and/or

    Hello Experts,
    If you give some idea, it will be greatly appreciated.
    This transport issue started appearing after a power outage, when the SAP system went through a hard shutdown.
    Then we brought the system back up. Before that, we did not have this transport issue.
    Our TMS landscape is
    DEV - QA - PRD
    SED - SEQ - SEP
    DEV has the TMS domain controller.
    FYI:
    At the OS level, when we run the scp command as the root user, it works fine for any TR.
    In STMS, while adding the TR in SEQ (the QA system), we are getting an error like this.
    Error:
    Transport control program tp ended with error code 0247
         Message no. XT200
    Diagnosis
         An error occurred when executing a tp command.
           Command:        ADDTOBUFFER SEDK906339 SEQ client010 pf=/us
           Return code:    0247
           Error text:     addtobuffer has problems with data- and/or
           Request:        SEDK906339
    System Response
         The function terminates.
    Procedure
         Correct the error and execute the command again if necessary.
    This is tp version 372.04.71 (release 700, unicode enabled)
    Addtobuffer failed for SEDK906339.
      Neither datafile nor cofile exist (cofile may also be corrupted).
    standard output from tp and from tools called by tp:
    tp returncode summary:
    TOOLS: Highest return code of single steps was: 0
    ERRORS: Highest tp internal error was: 0247

    When we do the scp using SM69,
    SEDADM@DEVSYS:/usr/sap/trans/cofiles/K906339.SED SEQADM@QASYS:/usr/sap/trans/cofiles/.
    it throws an error like the one below:
    Host key verification failed.
    External program terminated with exit code 1
    Thanks
    Praba

  • I have one problem with Data Guard. My archive log files are not applied.

    I have a problem with Data Guard: my archive log files are not being applied, even though I have received all archive log files on my physical standby DB.
    I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
    In Enterprise Manager the primary database looks OK; I get the message "Data Guard status Normal".
    But, as I wrote above, the archive log files are not applied.
    After I created the Physical Standby database, I have also done:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    10) on Primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    11)on primary db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    12) on standby db
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    13) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries my redo logs are now being applied. I think in my case it had to do with tnsnames.ora; at the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, however, says that it succeeded.
    In another session 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
