Problem with Data load from file

Hi,
when I try to load data from a comma-separated file into Oracle, I get a "page cannot be displayed" error right after the dialog where I specify the file and the separator.
My browser gives up almost immediately, after roughly one second.
Does anyone have an idea what causes this?
P.S. I know my English is horrible, sorry.

This is a known bug. See the thread below for a solution that sets the timeout. Remember to reboot the PC for the changes to take effect.
See Re: Problem with importing HTML DB applications

Similar Messages

  • Problem with date format from Oracle DB

    Hi,
    I am facing a problem with date fields from Oracle DB sources. The field in the DB table has base type DATE and DDIC type DATS.
    I mapped the date fields to date characteristics in BI, but the data that arrives in the PSA is in a strange format; it shows up as something like -0.PR.09-A.
    I have changed the field settings in the DataSource to both internal and external format, and I have also tried mapping these date fields to text fields, without luck. Everything delivers the same format.
    I have also tried conversion routines such as CONVERSION_EXIT_IDATE_INPUT to change the format, but they deliver the same result.
    If any of you have suggestions or have experienced such problems, please share your experience with me.
    Thanks in advance.
    Regards
    Varada

    Thanks for all your replies. The only solutions I can see involve creating a view in the database, but I would like a solution on the BI side. I would appreciate any ideas.
    The issue again in detail:
    I am facing an issue with date fields from Oracle data. The data sent from Oracle arrives in the format -0.AR.04-M. I am able to convert this date in BI with a conversion routine into the format 04-MAR-0.
    The problem is that I am getting data of length 10 (output format) in the format -0.AR.04-M, where the month is not numeric. Since it is text, it takes one extra character of space.
    I have tried converting in different ways and increased the length in BI, but the result is the same. I am wondering whether we can change the date format in the database.
    This date format puzzles me. I have checked date fields from other Oracle DB connections in BI; they receive data in the format 20.081.031, which can be converted in BI. Only the system I am working with causes a problem.
    Regards
    Varada

  • Help! Problem with reading objects from file

    I wrote a "Library" program for an assignment, and one of the requirements is that the library store all of its information to file upon exit, and reload this information from file when run.
    Well, the writing to file part is working. I'm using a FileOutputStream object and an ObjectOutputStream object. I can tell from the file size of the .dat file that information is going into it.
    But what I can't do is read from file. For that, I'm using a FileInputStream and an ObjectInputStream. I keep getting this exception:
    java.io.EOFException
         at java.io.DataInputStream.readInt(Unknown Source)
         at java.io.ObjectInputStream$BlockDataInputStream.readInt(Unknown Source)
         at java.io.ObjectInputStream.readInt(Unknown Source)
         at Library.readDataFromFile(Library.java:350)
         at Library.<init>(Library.java:63)
         at LibraryDriver.main(LibraryDriver.java:6)
    I looked this exception up, and it says it is thrown when a data input stream unexpectedly ends. But I am instantiating the input streams just before I try to read from the file:
        fileInStream = new FileInputStream(libraryFile);
        objInStream = new ObjectInputStream(fileInStream);
        Object[] objectArray = new Object[objInStream.readInt()];
    Both input streams have methods that "return the number of bytes that can be read from this file input stream without blocking". Just for kicks, I tried writing that number to the console.
    For the FileInputStream, I get 404 bytes.
    For the ObjectInputStream, I get 0 bytes.
    So I guess it's a problem with the ObjectInputStream? Anyone have any suggestions as to how I can fix this, please?

    Yep, here's the relevant code from the writeToFile() method:
        for (int i = 0; i < libraryAuthors.length; i++) {
            currentAlphaAuthorList = libraryAuthors[i];
            for (int j = 0; j < currentAlphaAuthorList.size(); j++) {
                currentAuthor = (Author) currentAlphaAuthorList.get(j);
                objOutStream.writeObject(currentAuthor);
            }
        }
        objOutStream.flush();
        objOutStream.close();
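    The write code above never writes an int, yet the read code begins with readInt(), which would explain the EOFException. Below is a minimal, hedged sketch (not the poster's actual Library class; Author is replaced by Object, and the method and variable names are illustrative) of keeping the two sides symmetric: write the object count first, then the objects, and read them back in exactly the same order.
        import java.io.*;
        import java.util.*;

        public class LibraryIo {
            // Write: record how many objects follow, then the objects themselves.
            static void writeAuthors(File file, List<Object> authors) throws IOException {
                try (ObjectOutputStream out =
                         new ObjectOutputStream(new FileOutputStream(file))) {
                    out.writeInt(authors.size());      // matches the readInt() on the read side
                    for (Object author : authors) {
                        out.writeObject(author);       // each object must be Serializable
                    }
                }
            }

            // Read: the first readInt() now finds the count that was actually written.
            static List<Object> readAuthors(File file)
                    throws IOException, ClassNotFoundException {
                try (ObjectInputStream in =
                         new ObjectInputStream(new FileInputStream(file))) {
                    int count = in.readInt();
                    List<Object> authors = new ArrayList<>();
                    for (int i = 0; i < count; i++) {
                        authors.add(in.readObject());
                    }
                    return authors;
                }
            }
        }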

  • Problem with saving/loading a file

    hi everyone,
    I have a program with animals that are JLabels with icons; I put these animals in an array and save them. When I save the file I catch this error message: sun.awt.image.ToolkitImage
    When I try to load the file I catch this error: writing aborted; java.io.NotSerializableException: sun.awt.image.ToolkitImage
    My animal class is declared as public class Animal extends JLabel implements MouseListener, Serializable {, and every other class associated with the Animal class also implements Serializable.
    I think it is because I am using an Image object in another serialized class. What can I do to fix this? (Does it have anything to do with the tooltip text, since I set one for each JLabel?) Any help is appreciated.
    regards,
    gher111
    Message was edited by:
    prodigy111

    Most Image implementations are not Serializable, so if any field, possibly in an outer class, refers to an object that contains an Image, you have a problem.
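    A hedged sketch of one common workaround, assuming the image can be reloaded from a path: mark the raw Image field transient so default serialization skips it, and rebuild it after deserialization. (Storing a javax.swing.ImageIcon instead is another option, since ImageIcon is itself Serializable.) The class and field names below are illustrative, not the poster's actual code.
        import java.awt.Image;
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.Serializable;
        import javax.swing.ImageIcon;
        import javax.swing.JLabel;

        public class Animal extends JLabel implements Serializable {
            private final String imagePath;   // serializable description of the image
            private transient Image image;    // skipped by default serialization

            public Animal(String imagePath) {
                this.imagePath = imagePath;
                this.image = new ImageIcon(imagePath).getImage();
                setIcon(new ImageIcon(image));
            }

            // Rebuild the transient image after the rest of the object is read back.
            private void readObject(ObjectInputStream in)
                    throws IOException, ClassNotFoundException {
                in.defaultReadObject();
                image = new ImageIcon(imagePath).getImage();
                setIcon(new ImageIcon(image));
            }
        }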

  • Problems with data controls from java classes in JSF pages.

    Hi! We have a problem in an application we are developing with JSF pages that use data controls generated from facade Java classes. When we run a page in debug mode and the page is loading, if we set a breakpoint on the first line of the method referenced by the data control, execution enters the method twice, and this is a problem for us. How can we solve this?
    We are using JDeveloper 11.1.1.2 with ADF faces.

    You might need to play around with the refresh property of the action binding.
    http://download.oracle.com/docs/cd/E15523_01/web.1111/b31974/adf_lifecycle.htm#BJECHBHF

  • Problems with data migration from E-Business Suite 11i to OID.

    Hi everyone, I'm trying to migrate data between Oracle E-Business Suite Release 11i and Oracle Internet Directory.
    I'm following the guidelines from the document Integrating Oracle E-Business Suite Release 11i with Oracle Internet Directory and Oracle Single Sign-On.
    I have created an intermediate LDIF file that will be converted to a final LDIF file by the ldifmigrator utility. An error occurs when I try to disable a certain profile during the conversion of the intermediate LDIF file to the final LDIF file. I am running oidprovtool as cn=orcladmin and the syntax is as follows:
    oidprovtool operation=disable \
    ldap_host=portal10g.skyrr.is \
    ldap_port=3060 \
    ldap_user_dn=cn=oracleadm \
    ldap_user_password=mypasswd \
    application_dn="orclApplicatonCommonName=K95LSP,cn=EBusiness,cn=Products,cn=OracleContext,dc=skyrr,dc=is" \
    profile_mode=BOTH
    The command returns:
    ERROR: Invalid directory credentials. Please check ldap_user_dn and ldap_user_password parameters.
    I am able to bind with ldapbind using the following syntax:
    ldapbind -h portal10g.skyrr.is \
    -p 3060
    -D cn=orcladmin
    -w ******
    Can anyone help me resolve the "Invalid directory credentials" error so that I can continue my Applications 11i migration?
    I must admit that I'm new to Oracle Internet Directory, and all advice and pointers are highly appreciated.
    Regards,
    Sammi

    Hello,
    As Scott says, the fact that you're using HTTPS rather than HTTP is pretty much transparent to the mod_plsql handler (in the sense that you don't really need to do anything 'special' in your application).
    John.
    http://jes.blogs.shellprompt.net
    http://apex-evangelists.com

  • Data loading from files

    Hi there,
    I have a schema with two dimensions: time (it contains two years, 1990 and 1991, and has no hierarchy) and geography (levels: country-continent).
    I should mention that I have two groups of files:
    1) for year 1990
    2) for year 1991
    Each group contains the data for one year; within each group there are data for the measures of the cube.
    Each file has a specific format with two columns: the first column contains the countries, and the second column contains the data for each country.
    For example:
    measure1_1990.txt
    country1 data1
    country2 data2
    I need help loading these data into my cube. I ask because in the tutorial the cube is only maintained and the data is already loaded.
    Could you help me load data into my cubes from this batch of files?
    thanks in advance

    As you mentioned in your post, the file should be an Excel sheet saved as CSV, so check why a semicolon is needed as the separator; analyze a sample line such as:
    1;1;99991231;10000101;0.99;0.00

  • Issue with Data Load from R/3

    Hi,
    I have created a generic DataSource in R/3 based on the option Extraction from SAP Query. The SAP Query (InfoSet), created in transaction SQ02, is based on the tables BKPF and BSEG. I generated a DataSource on this and successfully replicated it into BW, but when I trigger the load using an InfoPackage it fails immediately with the following messages:
    "If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request."
    "Job terminated in source system --> Request set to red"
    Can you point out where the issue is? I have tested the source system connection and it is fine; other extractors work and data can be pulled.
    Thanks
    Rashmi.

    Hi Rashmi,
    Try the following:
    - RSA3 in source system -> test extraction OK?
    - Shortdump (ST22) found in BW or R/3?
    - Locate the extraction job in R/3 (SM37) and look at the job log...
    Hope this leads you to the cause...
    Grtx
    Marco

  • Problem with data distribution from HCM to e-Recruiting

    Hello masters:
    I have a problem when I try to distribute data from HCM to e-Recruiting. I'm getting status 51 just for the PA infotype segments (E1P0000, E1P0001, etc.). The error message I am receiving is:
    "Error in subroutine read_namtb for structure of infotype".
    My e-Recruiting system is SAP EHP 1 for SAP NetWeaver 7.0, so I'm using Business Partners for storing data from PA infotypes.
    I'm using message type HRMD_ABA, basic type HRMD_ABA05.
    I think it could be the implementation HRRCF00_INBD_NEWMOD of the BAdI HRALE00INBOUND_IDOC, because it keeps becoming inactive even though I activate it.
    Does anybody know what I can do?
    Thanks in advance!
    Edited by: Rodrigo Arenas Arriola on Jun 25, 2009 2:45 AM

    Hello Rodrigo,
    If you are using EhP1 for NW 7.0, you should have E-Recruiting 600 EhP4 on your system.
    I have never encountered this particular error message before, so I have to guess. The message indicates that the system had trouble reading some infotype information.
    The ALE distribution for EhP4 stores the PA data in new infotypes 558*. The error you get points to a missing BAdI activation. If the BAdIs are not activated correctly, ALE tries to write the information from the PA infotypes one-to-one into the E-Recruiting system, but E-Recruiting does not know the PA infotypes, which should lead exactly to the error you get.
    Could you check the following settings:
    BAdI                     Implementation 
    HRSYNC_P                 CONV_HR_DATA_TO_EREC -> active
    HRALE00INBOUND_IDOC      HRRCF00_INBD_NEWMOD -> active
    HRALE00SPLIT_INBOUND     HR_INB_PROCESS_IDOC -> inactive
    HRALE00INBOUND_IDOC      HRRCF00_DELETE_SPREL -> inactive
    Kind Regards
    Roman

  • Problem with reading numbers from file into double int array...

    Okay, this is a snippet of my code:
    public void readMap(String file) {
        try {
            URL url = getClass().getResource(file);
            System.out.println(url.getPath());
            BufferedReader in = new BufferedReader(new FileReader(url.getPath()));
            String str;
            String[] temp;
            int j = 0;
            while ((str = in.readLine()) != null) {
                temp = str.split(",");
                for (int i = 0; i < temp.length; i++) {
                    map[j][i] = Integer.parseInt(temp[i]);
                }
                j++;
            }
            in.close();
        } catch (IOException e) {
            System.out.println("Error: " + e.toString());
        }
    }
    map[][] is a two-dimensional int array. The code runs through each line of the text file (each line looks like this: 0,3,6,2,2,3,1,5,2,3,5,2), and I want to put the numbers into the corresponding two-dimensional array (which is where map[][] comes in). This code WOULD work, except that I need to set the sizes of the arrays before I start adding, and I don't know how to get the exact sizes. How can I get around this issue?
    Message was edited by:
    maxfarrar

    You can do a two-dimensional ArrayList? The syntax you wrote didn't work.
    I tried doing:
    private ArrayList<ArrayList><Integer>> map;
    Your syntax is just wrong -- or the forum software has a bug in handling angle brackets. The closing angle bracket after the second 'ArrayList' is the one generated by the bug. Basically, it should be T<T<T2>> without spaces, so the declaration is:
    private ArrayList<ArrayList<Integer>> map;
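    A minimal sketch of that approach, assuming the file is still comma-separated lines of integers (the class and method names here are illustrative): build a List<List<Integer>> row by row, so no sizes need to be known up front, and convert to an int[][] at the end if a plain array is required.
        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class MapReader {
            public static int[][] readMap(String path) throws IOException {
                List<List<Integer>> rows = new ArrayList<>();
                try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        List<Integer> row = new ArrayList<>();
                        for (String token : line.split(",")) {
                            row.add(Integer.parseInt(token.trim()));
                        }
                        rows.add(row);
                    }
                }
                // Convert to a plain int[][] once the dimensions are known.
                int[][] map = new int[rows.size()][];
                for (int j = 0; j < rows.size(); j++) {
                    List<Integer> row = rows.get(j);
                    map[j] = new int[row.size()];
                    for (int i = 0; i < row.size(); i++) {
                        map[j][i] = row.get(i);
                    }
                }
                return map;
            }
        }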

  • Error in data loading from 3rd party source system with DBCONNECT

    Hi,
    We have just finished an upgrade of SAP BW 3.10 to SAP NW 7.0 EHP1.
    After the upgrade, we are facing a problem with data loads from a third party Oracle source system using DBConnect.
    The connection is working OK and we can see the tables in the source system. But we cannot load the data.
    The error in the monitor is as follows:
    'Error message from the source system
    Diagnosis
    An error occurred in the source system.
    System Response
    Caller 09 contains an error message.
    Further analysis:
    The error occurred in Extractor .
    Refer to the error message.'
    But, unfortunately, the error message has no further information.
    If we look at the job log in sm37, the job finished with the following log -                                                                               
    27.10.2009 12:14:19 Job started                                                                                00           516          S 
    27.10.2009 12:14:19 Step 001 started (program RSBATCH1, variant &0000000000119, user ID RXSAHA)                    00           550          S 
    27.10.2009 12:14:23 Start InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG                                              RSM1          797          S 
    27.10.2009 12:14:24 Element NOAUTHORITYCHECK is not available in the container                                     OL           356          S 
    27.10.2009 12:14:24 InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG created request REQU_4FMXSQ6TLSK5CYLXPBOGKF31G     RSM1          796          S 
    27.10.2009 12:14:24 Job finished                                                                                00           517          S 
    In a BW 3.10 system, there is no  message related to element NOAUTHORITYCHECK. So, I am wondering if this is something new in NW 7.0.
    Thanks in advance,
    Rajib

    There are several things that can cause errors like this:
    1. The RFC connection failed.
    2. Check the source system.
    3. Check with the Oracle consultants whether they are filling up the loads; if so, tell them to stop.
    4. Check IDoc processing.
    5. Check for memory issues.
    6. Check the DataSource first, change it, then activate it and run the load again.
    Also check the RFC connection in SM59. If it is OK, then check SAP Note 692195 for authorization.
    Santosh

  • Problem with data

    Hi,
    I have a problem with a data load. For the 12th month I don't have any actual (finance-related) data in R/3, but when I look in the cube I do have data. I don't know how the data can come from R/3 without any actual data there. How can I check whether that data came from R/3 or from somewhere else, and once I find that out, how and where should I delete it? Please help me solve this issue.

    Hi,
    If there is data in the InfoCube, there should be a request. Use the Contents of Data Target option in Manage, select any data record that is in scope, and find the request number. Then find the same request on the Requests tab of Manage InfoCube and click the monitor symbol for that request; this takes you to the monitor screen, where you can see the source system, the InfoPackage, and the DataSource used for the upload.
    If there is no ALE scenario in your system landscape, then there should be no possibility of data existing in BW but missing in R/3, so check the data flow and availability once again.
    Reg,
    Vish.

  • URGENT ! JDEV 10.1.2 Problem with data control generated from session bean

    I have a problem with a data control generated from a session bean that returns a collection of data transfer objects.
    The DTOs seem to be correct. The session bean loads the data correctly and the objects are full of data; displaying the DTO content on the console is fine.
    When I generate a data control from this session bean and associate the DTO included in the collection, only the first object level and one-to-one DTO objects are correctly set in the data control. Objects that represent a collection inside the DTO (one-to-many foreign key) are set as a collection with an iterator, but the structure of the object is not set. I don't know how to associate this second level of collection with the DTO bean class to obtain the attribute definitions.
    I created a test case with the HR schema, like the hrApp demo application in the tutorial, with the departments and employees tables. I got the same problem.
    Is it a bug?
    Is there a workaround to force the data control to understand the collection's data structure?
    Help is welcome! This is urgent!

    We solved the problem by assigning the child DTO bean class to the node representing the iterator in the XML file corresponding to the master DTO.

  • Data loading from flat file to cube using bw3.5

    Hi experts,
    Kindly give me the detailed steps, with screenshots, for loading data from a flat file into a cube using BW 3.5. Please.

    Hi ,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is part of a value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field. (A small parsing sketch illustrating this separator/escape rule follows at the end of this reply.)
    For example, you chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45".
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
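    For illustration only, here is a small, hedged Java sketch of the separator/escape rule described in step 4 (f) above. It covers just the simple enclosing-escape case (a separator inside an escaped value stays part of the value) and is not part of the SAP procedure; the class and method names are invented for the example.
        import java.util.ArrayList;
        import java.util.List;

        public class CsvLineSplitter {
            // Splits one line using a data separator (e.g. ';') and an escape character (e.g. '"').
            public static List<String> split(String line, char separator, char escape) {
                List<String> fields = new ArrayList<>();
                StringBuilder current = new StringBuilder();
                boolean inEscape = false;
                for (int i = 0; i < line.length(); i++) {
                    char c = line.charAt(i);
                    if (c == escape) {
                        inEscape = !inEscape;            // toggle at enclosing escape characters
                    } else if (c == separator && !inEscape) {
                        fields.add(current.toString());  // a separator outside escapes ends the field
                        current.setLength(0);
                    } else {
                        current.append(c);               // everything else is part of the value
                    }
                }
                fields.add(current.toString());
                return fields;
            }

            public static void main(String[] args) {
                // "12;45" stays one value; the other semicolons split the fields.
                System.out.println(split("1;\"12;45\";0.99", ';', '"'));
                // prints [1, 12;45, 0.99]
            }
        }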

  • I have one problem with Data Guard. My archive log files are not applied.

    I have a problem with Data Guard: my archive log files are not applied, even though all archive log files have been received on my physical standby database.
    I have created a physical standby database on Oracle 10gR2 (Windows XP Professional). The primary database is on another computer.
    In Enterprise Manager on the primary database everything looks OK; I get the message "Data Guard status Normal".
    But, as I wrote above, the archive log files are not applied.
    After I created the physical standby database, I also did the following:
    1. I connected to the Physical Standby database instance.
    CONNECT SYS/SYS@luda AS SYSDBA
    2. I started the Oracle instance at the Physical Standby database without mounting the database.
    STARTUP NOMOUNT PFILE=C:\oracle\product\10.2.0\db_1\database\initluda.ora
    3. I mounted the Physical Standby database:
    ALTER DATABASE MOUNT STANDBY DATABASE
    4. I started redo apply on Physical Standby database
    alter database recover managed standby database disconnect from session
    5. I switched the log files on Physical Standby database
    alter system switch logfile
    6. I verified the redo data was received and archived on Physical Standby database
    select sequence#, first_time, next_time from v$archived_log order by sequence#
    SEQUENCE# FIRST_TIME NEXT_TIME
    3 2006-06-27 2006-06-27
    4 2006-06-27 2006-06-27
    5 2006-06-27 2006-06-27
    6 2006-06-27 2006-06-27
    7 2006-06-27 2006-06-27
    8 2006-06-27 2006-06-27
    7. I verified the archived redo log files were applied on Physical Standby database
    select sequence#,applied from v$archived_log;
    SEQUENCE# APP
    4 NO
    3 NO
    5 NO
    6 NO
    7 NO
    8 NO
    8. on Physical Standby database
    select * from v$archive_gap;
    No rows
    9. on Physical Standby database
    SELECT MESSAGE FROM V$DATAGUARD_STATUS;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    ARC1: Becoming the heartbeat ARCH
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery not using Real Time Apply
    MRP0: Background Media Recovery terminated with error 1110
    MRP0: Background Media Recovery process shutdown
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 2148
    RFS[1]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[2]: Assigned to RFS process 2384
    RFS[2]: Identified database type as 'physical standby'
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[3]: Assigned to RFS process 3188
    RFS[3]: Identified database type as 'physical standby'
    Primary database is in MAXIMUM PERFORMANCE mode
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[4]: Assigned to RFS process 3168
    RFS[4]: Identified database type as 'physical standby'
    RFS[4]: No standby redo logfiles created
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[3]: No standby redo logfiles created
    10. on Physical Standby database
    SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 0 0 0 0
    RFS IDLE 1 9 13664 2
    RFS IDLE 0 0 0 0
    11) on the primary database:
    select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    ARCm: Becoming the 'no FAL' ARCH
    ARCm: Becoming the 'no SRL' ARCH
    ARCd: Becoming the heartbeat ARCH
    Error 1034 received logging on to the standby
    Error 1034 received logging on to the standby
    LGWR: Error 1034 creating archivelog file 'luda'
    LNS: Failed to archive log 3 thread 1 sequence 7 (1034)
    FAL[server, ARCh]: Error 1034 creating remote archivelog file 'luda'
    12) on the primary database:
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00004_0594204176.001 4 NO
    Luda 4 NO
    Luda 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00005_0594204176.001 5 NO
    Luda 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00006_0594204176.001 6 NO
    Luda 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00007_0594204176.001 7 NO
    Luda 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\IRINA\ARC00008_0594204176.001 8 NO
    Luda 8 NO
    13) on the standby database:
    select name,sequence#,applied from v$archived_log;
    NAME SEQUENCE# APP
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00004_0594204176.001 4 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00003_0594204176.001 3 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00005_0594204176.001 5 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00006_0594204176.001 6 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00007_0594204176.001 7 NO
    C:\ORACLE\PRODUCT\10.2.0\ORADATA\LUDA\ARC00008_0594204176.001 8 NO
    14) my init.ora files
    On standby db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0\admin\luda\adump'
    *.background_dump_dest='C:\oracle\product\10.2.0\admin\luda\bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\luda\luda.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0\admin\luda\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_unique_name='luda'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='luda'
    *.fal_server='irina'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/luda/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_2='SERVICE=irina LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/irina/','C:/oracle/product/10.2.0/oradata/luda/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0\admin\luda\udump'
    On primary db
    irina.__db_cache_size=79691776
    irina.__java_pool_size=4194304
    irina.__large_pool_size=4194304
    irina.__shared_pool_size=75497472
    irina.__streams_pool_size=0
    *.audit_file_dest='C:\oracle\product\10.2.0/admin/irina/adump'
    *.background_dump_dest='C:\oracle\product\10.2.0/admin/irina/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='C:\oracle\product\10.2.0\oradata\irina\control01.ctl','C:\oracle\product\10.2.0\oradata\irina\control02.ctl','C:\oracle\product\10.2.0\oradata\irina\control03.ctl'
    *.core_dump_dest='C:\oracle\product\10.2.0/admin/irina/cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_file_name_convert='luda','irina'
    *.db_name='irina'
    *.db_recovery_file_dest='C:\oracle\product\10.2.0/flash_recovery_area'
    *.db_recovery_file_dest_size=2147483648
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=irinaXDB)'
    *.fal_client='irina'
    *.fal_server='luda'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(irina,luda)'
    *.log_archive_dest_1='LOCATION=C:/oracle/product/10.2.0/oradata/irina/ VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=irina'
    *.log_archive_dest_2='SERVICE=luda LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=luda'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_max_processes=30
    *.log_file_name_convert='C:/oracle/product/10.2.0/oradata/luda/','C:/oracle/product/10.2.0/oradata/irina/'
    *.open_cursors=300
    *.pga_aggregate_target=16777216
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=167772160
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='C:\oracle\product\10.2.0/admin/irina/udump'
    Please help me!!!!

    Hi,
    After several tries, my redo logs are now being applied. I think in my case it had to do with tnsnames.ora; at the moment I have both databases in both tnsnames.ora files using the SID and not the SERVICE_NAME.
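    For reference, a hedged sketch of what such SID-based tnsnames.ora entries might look like (the SIDs irina and luda come from the init.ora excerpts above; the hostnames and port are placeholders, so adjust them to your environment):
        # Hypothetical tnsnames.ora entries; irina = primary SID, luda = standby SID.
        IRINA =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
            (CONNECT_DATA =
              (SID = irina)
            )
          )

        LUDA =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
            (CONNECT_DATA =
              (SID = luda)
            )
          )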
    Now I want to use DGMGRL. Adding a configuration and a standby database works fine, but when I try to enable the configuration, DGMGRL gives no feedback and looks like it is hanging. The log, however, says that it succeeded.
    In another session 'show configuration' results in the following, confirming that the enable succeeded.
    DGMGRL> show configuration
    Configuration
    Name: avhtest
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    avhtest - Primary database
    avhtestls53 - Physical standby database
    Current status for "avhtest":
    Warning: ORA-16610: command 'ENABLE CONFIGURATION' in progress
    Is there anybody who has experienced the same problem and/or knows the solution to this?
    With kind regards,
    Martin Schaap
