Path for data loader

Hi,
When I try to launch the dataload play user in the DEV instance, I get an error about the wrong DBC file being used. When I check it, it shows a wrong path for my DBC file. Any idea where to change this path?
OS: RHEL 5
DB: 11.2.0.1
EBS: 12.1.2

Hi,
Is the value of the variable "s_fnd_secure" in the $CONTEXT_FILE the same as the value of the $FND_SECURE environment variable? It probably is, but it can't hurt to check.
Have you tried stopping the Apps web processes (making sure that no processes are left running at the O/S level) and then restarting the web server?
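A quick way to compare the two values -- a minimal sketch, assuming the context file tags each variable with an oa_var attribute (the usual layout) and that $CONTEXT_FILE points at your context file; the fallback path is only a placeholder:

import os
import xml.etree.ElementTree as ET

# $CONTEXT_FILE should already be set in an EBS applications environment;
# the fallback path here is only a placeholder.
context_file = os.environ.get("CONTEXT_FILE", "/path/to/DEV_host.xml")

node = ET.parse(context_file).getroot().find(".//*[@oa_var='s_fnd_secure']")
ctx_value = node.text if node is not None else None
env_value = os.environ.get("FND_SECURE")

print("s_fnd_secure (context file):", ctx_value)
print("FND_SECURE   (environment) :", env_value)
print("MATCH" if ctx_value == env_value else "MISMATCH -- correct the context file or re-source the environment")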
This test might be useful:
1) http://server:port/OA_HTML/jsp/fnd/aoljtest.jsp
2) Click on "Enter AOL/J Setup Test"
3) Run these two tests
a. Locate DBC File
b. Verify DBC Settings
Regards
Frank

Similar Messages

  • Sample SOAP request for Data Loader API

    Hi
    Can anyone please help me out by giving a sample SOAP request for the Data Loader API? This is to, say, import 1K records from my system to the CRM instance I have.

    Log into the application and then click on Training and Support; there is a WS Library of Information within the application.
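    For what it's worth, here is the general shape of a SOAP-over-HTTP call in Python. This is only a hedged sketch: the endpoint URL, header contents and body elements below are placeholders -- the real operation names, namespaces and session-management details have to come from the WSDLs in the WS Library mentioned above.

    import requests  # assumes the requests library is installed

    # Placeholder endpoint -- use the web services URL of your own CRM instance.
    endpoint = "https://<your-crm-host>/Services/Integration"

    # Placeholder envelope -- fill the header and body from the Data Loader WSDL.
    envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header>
        <!-- authentication/session header as described in the WS Library -->
      </soap:Header>
      <soap:Body>
        <!-- import request for your 1K records, built from the WSDL schema -->
      </soap:Body>
    </soap:Envelope>"""

    response = requests.post(
        endpoint,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": ""},
    )
    print(response.status_code)
    print(response.text[:500])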

  • Missing Standard Dimension Column for data load (MSSQL to Essbase Data)

    This is a similar error to one posted by Sravan -- however, I'm sure I have all dimensions covered, going from MS SQL to the SunOpsys staging area to Essbase. It is telling me a standard dimension is missing, even though I have all of them accounted for:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 23, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
    at com.hyperion.odi.essbase.ODIEssbaseDataWriter.loadData(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    I'm using multiple time period inputs -- BegBalance,Jul,Aug,Sep,Oct,Nov,Dec,Jan,Feb,Mar,Apr,May,Jun (target has all of those in place of Time Periods)
    I'm using hard-coded input mappings for Metric, Scenario, Version, HSP_Rates and Currencies -> 'Amount', 'Actual', 'Final', 'HSP_InputValue', 'Local' respectively.
    The only thing I can think of is that since I'm loading to each of the months in the Time Periods dimension (the reversal was set up to accommodate that), it's somehow still looking for that? Time Periods as a dimension does not show up in the reversal -- only the individual months named above.
    Any ideas on this one?

    John -- I extracted the data to a file and created a data load rule in Essbase to load the data. All dimensions are present and accounted for (five header items, similar to here) and everything loads fine.
    So I'm not sure what else is wrong -- I'm still getting the missing dimension error.
    Any other thoughts? Here's the entire error message. Thanks for all your help on this.
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 23, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
         at com.hyperion.odi.essbase.ODIEssbaseDataWriter.loadData(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
         at org.python.core.PyMethod.__call__(PyMethod.java)
         at org.python.core.PyObject.__call__(PyObject.java)
         at org.python.core.PyInstance.invoke(PyInstance.java)
         at org.python.pycode._pyx8.f$0(<string>:23)
         at org.python.pycode._pyx8.call_function(<string>)
         at org.python.core.PyTableCode.call(PyTableCode.java)
         at org.python.core.PyCode.call(PyCode.java)
         at org.python.core.Py.runCode(Py.java)
         at org.python.core.Py.exec(Py.java)
         at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.k(e.java)
         at com.sunopsis.dwg.cmd.g.A(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Caused by: com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
         at com.hyperion.odi.essbase.ODIEssbaseDataWriter.validateColumns(Unknown Source)
         ... 32 more
    com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.k(e.java)
         at com.sunopsis.dwg.cmd.g.A(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
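    In general this error means the flow feeding the Essbase data load does not expose a column mapped to every standard dimension of the cube. A quick sanity check you can run against a file extract before loading -- the dimension names below are placeholders for the dimensions of your own outline:

    import csv

    # Placeholder dimension list -- replace with the standard dimensions of your outline.
    required_dimensions = ["Account", "Period", "Scenario", "Version", "Currency", "HSP_Rates", "Entity"]

    # Header row of the extracted data file (as produced for the Essbase load rule test).
    with open("extract.csv", newline="") as f:
        header = next(csv.reader(f))

    missing = [d for d in required_dimensions if d not in header]
    print("Missing dimension columns:", missing or "none")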

  • Takes Long time for Data Loading.

    Hi All,
    Good morning. I am new to SDN.
    Currently I am using the datasource 0CRM_SRV_PROCESS_H, which contains 225 fields; I am using around 40 of them in my report.
    Can I hide the remaining fields at the datasource level itself (TCODE: RSA6)?
    Currently the data load takes a long time to move the data from the PSA to the ODS (ODS 1).
    I am also pulling some data from another ODS (ODS 2) via a lookup, and it takes a long time to update the data in the active data table of the ODS.
    Can you please suggest how to improve the data load performance in this case.
    Thanks & Regards,
    Siva.

    Hi,
    Yes, you can hide them -- just check the "Hide" box for those fields. Are you on BI 7.0 or BW? Either way, is the number of records huge?
    If so, you can split the records and execute -- I mean use the same InfoPackage, just execute it with different selections.
    Check in ST04 whether there are any locks or lock waits. Then go to SM37 and check whether a long-running job exists and whether it is progressing: double-click the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
    Also check the system log in SM21 and the short dumps in ST22.
    To improve performance, you can try to increase the virtual memory or the number of servers, if possible; that will increase the number of work processes, since if many jobs run at the same time there may otherwise be no free work processes left.
    Regards,
    Debjani

  • Increase the number of background work processes for data load performance

    Hi all,
    There are 10 available background work processes in the BW system. We're doing a mass load to multiple ODSs, but the system uses only 3 background processes. How can I increase the number of background work processes used for a new data load?
    I tried to change the number of processes with RSODSO_SETTINGS, but with no success. Are there any other settings that need to change?
    thanks,
    Yigit

    Hi Sankar,
    I entered the max. proc. number into ROIDOCPRMS, but it doesn't make a difference; the system still uses only 3 background processes. RSCUSTA2 is replaced by RSODSO_SETTINGS in BI 7.0, and that transaction can only change the processes for data activation, SID generation and rollback. I need to change the number of processes for data extraction.

  • Underlying RS Table :: Date & Time for Data Load Requests

    Dear SAP BW Community,
    In BW 3.5, does anybody know the underlying "RS" table where I can check the earliest date and time at which a data target was loaded, by providing the data target's technical name in SE16?
    Thanks!

    OK, I've found the timestamp of the data load requests in the table RSMONICDP.
    To get the earliest data load for InfoCube FRED, I'm going to Oracle via SQL*Plus, as follows:
    select to_char(min(TIMESTAMP)) from sapr3.RSMONICDP where ICUBE = 'FRED';
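    If you want to pull the same value programmatically instead of through SQL*Plus, here is a minimal sketch using the cx_Oracle driver; the credentials and TNS alias are placeholders:

    import cx_Oracle  # assumes the Oracle client libraries and cx_Oracle are installed

    # Placeholder credentials and TNS alias -- use your own BW database connection.
    conn = cx_Oracle.connect("sapr3", "password", "BWP")
    cur = conn.cursor()

    # Same query as above, parameterised on the InfoCube's technical name.
    cur.execute(
        "SELECT TO_CHAR(MIN(timestamp)) FROM sapr3.rsmonicdp WHERE icube = :cube",
        cube="FRED",
    )
    print(cur.fetchone()[0])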

  • Where Clause in Table Lookups for Data Load

    Hello,
    In Shared Components I created a Data Load Table. In this Data Load Table I added a Table Lookup. On the page for editing the Table Lookup there is a field called "Where Clause". I tried to add a where clause to my Table Lookup in this field, but it seems to have no effect on the data load process.
    Does someone know how to use this Where Clause field?
    Thanks,
    Seb

    Hi,
    I'm having the same problem with the where clause being ignored in the table lookup. Is this a bug, and if so, is there a workaround?
    Thanks in advance.

  • Add Data Source for Data Load Rule

    I created an ODBC system data source in both the 32-bit and 64-bit ODBC managers (just to be sure), but they're not showing up as options for data sources in the SQL interface in EAS. Is there some step that I'm missing?
    Thanks.

    Have you created the ODBC data source on the Essbase server or on the server hosting EAS? It should be on the Essbase server.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Suggest good strategy for data load through standard datasources

    Hi BW Gurus,
    We are currently using standard purchasing-related datasources, and we foresee new reports coming in later based on them.
    Can you please suggest a good general strategy to follow for bringing in R/3 data? Our concerns are around data loads (initializations etc.), as some of the standard datasources are already in production.
    Please advise.

    Hi
    Go through these weblogs from Roberto Negro -- they may help you.
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    Regards,
    Rajesh.

  • Last day of previous month for data load

    Hi,
    I have to load data from the previous month into the PSA and then into an InfoCube, and I was wondering how to get the last day of the previous month so I can write the ABAP code for it. I will be writing the code at the InfoPackage level in the data selection. I could load data from the 1st of the previous month to the 1st of the current month, but that would mean an additional load of 30,000 records on the 1st of every month (I load 30,000 records every day), so I would like to limit the load to the range from the 1st of the previous month to the last day of that month. This will be a repetitive load.
    DATA: CURR_MM(2) TYPE N,
    CURR_YYYY(4) TYPE N,
    CURR_DD(2) TYPE N,
    PREV_MM(2) TYPE N,
    PREV_YYYY(4) TYPE N,
    PREV_DD(2) TYPE N,
    YYYY_MM(6),
    YYYY_MM1(6),
    DATE LIKE SY-DATUM.
    DATE = SY-DATUM.
    CURR_YYYY = DATE+0(4).
    CURR_MM = DATE+4(2).
    CURR_DD = DATE+6(2).
    PREV_DD = 1.
    IF CURR_MM = '01'.
    PREV_MM = '12'.
    PREV_YYYY = CURR_YYYY - 1.
    ELSE.
    PREV_MM = CURR_MM - 1.
    PREV_YYYY = CURR_YYYY.
    ENDIF.
    concatenate PREV_YYYY PREV_MM PREV_DD into YYYY_MM.
    concatenate CURR_YYYY PREV_MM PREV_DD into YYYY_MM1.
    read table l_t_range with key
    fieldname = 'BLDAT'.
    l_idx = sy-tabix.
    l_t_range-low = YYYY_MM.
    l_t_range-high = YYYY_MM1.
    l_t_range-sign = 'I'.
    l_t_range-option = 'BT'.
    modify l_t_range index l_idx.
    p_subrc = 0.
    Mind you, this code will load data from the 1st of the previous month to the 1st of the current month. I just don't want to load that extra "1st day" of current-month data, since I have 30,000 records every day.
    Say, for example, I want to load data from 1st Mar to 31st Mar, or from 1st Feb to 28th Feb. How should I modify the above code?
    Is there a formula to get the last date of the previous month? That's all I need; it would solve the problem.

    Try this routine. It will return a range from the 1st day of the current month to the end of the month.
    DATA: l_s_range TYPE rsr_s_rangesid,
              E_T_RANGE TYPE  RSR_T_RANGESID.
    DATA: year(4) TYPE n,
          month(2) TYPE n,
          day(2) TYPE n,
        ld_keydate  TYPE sydatum,
          ld_lastday  TYPE sydatum.
      REFRESH e_t_range.
      CLEAR l_s_range.
      year  = sy-datum(4).
      month = sy-datum+4(2).
    *Months with 31 days in year
      IF month = '01' OR
         month = '03' OR
         month = '05' OR
         month = '07' OR
         month = '08' OR
         month = '10' OR
         month ='12'.
        day = '31'.
      ENDIF.
    *check for leap year: provoke sy-subrc <> 0
      IF month = '02'.
        day = '29'.
        MOVE:   '02'        TO ld_keydate+4(2),
                year        TO ld_keydate(4),
                day         TO ld_keydate+6(2).
        CALL FUNCTION 'DATE_CHECK_PLAUSIBILITY'
          EXPORTING
            date   = ld_keydate
          EXCEPTIONS
            OTHERS = 1.
        IF sy-subrc <> 0.
          day = '28'.
        ENDIF.
      ENDIF.
    *months with 30 days in year
      IF month = '04' OR
         month = '06' OR
         month = '09' OR
         month = '11'.
        day = '30'.
      ENDIF.
      MOVE: year              TO ld_lastday(4),
            month             TO ld_lastday+4(2),
            day               TO ld_lastday+6(2).
      l_s_range-low  = sy-datum.
      l_s_range-high = ld_lastday.
      l_s_range-sign = 'I'.
      l_s_range-opt  = 'BT'.
      APPEND l_s_range TO e_t_range.
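    As an aside, the "last day of the previous month" value the original poster asked for reduces to taking the first day of the current month and subtracting one day (in ABAP: set the day component of sy-datum to '01', then subtract 1 from the date). The same calendar logic as a minimal Python sketch:

    from datetime import date, timedelta

    def last_day_of_previous_month(today: date) -> date:
        """First day of the current month minus one day."""
        return today.replace(day=1) - timedelta(days=1)

    # Example: for 15 Mar 2010 this returns 28 Feb 2010.
    print(last_day_of_previous_month(date(2010, 3, 15)))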

  • Changing the File path for SQL Loader Recognition

    I am learning how to create a control file. The names.ctl file was placed in a "Names" folder under "C:\Windows".
    I get the following error when trying to run the script for sqlldr:
    Sql*Loader-500 Unable to open file.
    Sql*Loader-553 file not found
    Sql*Loader-System error: the system cannot find the specified file.
    The path to the file is c:\Windows\names\names.ctl.
    How do I make SQL Loader recognize it?

    Please post details of your OS and database versions. Have you tried this?
    sqlldr CONTROL=c:\Windows\names\names.ctl ...
    HTH
    Srini
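    The key point is that sqlldr only finds the control file if it is given the full path (or is run from the directory that contains it). A minimal sketch of scripting the call -- the connect string is a placeholder:

    import subprocess

    cmd = [
        "sqlldr",
        "userid=scott/tiger@orcl",                 # placeholder credentials
        r"control=c:\Windows\names\names.ctl",     # full path to the control file
        r"log=c:\Windows\names\names.log",
    ]
    result = subprocess.run(cmd)
    print("sqlldr exit code:", result.returncode)  # 0 = success, non-zero = warnings/errors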

  • Essbase 9.3.1, more time taken for data load.

    Hi,
    I am trying to load 15 GB of data (taken from two separate Oracle databases) directly into my ASO application.
    The load is very slow; what factors should I consider to make it faster?
    Will increasing the RAM size help me in this context?
    I have gone through the Essbase 9.3.1 admin doc; the hard-disk and RAM requirements are given there, but they are for block storage.
    Is there any difference between that estimation and the ASO estimation?
    If anyone can guide me on what has to be taken care of while loading the cube with huge amounts of data, I shall be very thankful.

    The settings that matter for aggregate storage are:
    DLSINGLETHREADPERSTAGE FALSE
    DLTHREADSPREPARE Sample Basic 3
    The write option (DLTHREADSWRITE Sample Basic 4) doesn't have any impact on the write speed of the data in ASO.
    The statements have to be included in the essbase.cfg file, which is the configuration file of the Essbase server.
    As per the documentation, should this be done through MaxL, ESSCMD or the Analytic Services console?
    Questions:
    1) Should we simply put these statements in the essbase.cfg file without a semicolon and restart the server/application?
    2) If these statements can be executed through MaxL, please let us know how we can do that.
    3) How do we know the current number of threads being used for read/write?
    Thanks

  • Extraction problem - selection conditions for data load using abap program

    Hi All,
    I have a problem loading data over a selected period where the date range selection is done using an ABAP routine (type 6). Although in the request header tab of the monitor screen I can see the selection date range populated correctly, no records are being extracted. But if I delete the ABAP filter and directly give the same date range for the selection, we are able to extract data. If anybody has faced a similar problem and has a solution for it, please help me with your suggestions.
    Thanks,
    nithin.

    It seems the date range is not properly set in the routine.
    You can check the value of the selection period generated by the routine on the data selection tab -- there is an "Execute" button there.
    Click it to test the selection values generated by the ABAP routine.
    If the values there look correct, then paste the code of the routine you have written, with brief details of the logic you have applied.
    Sonal

  • Selection for data load

    hi friends,
    I have a datasource that contains 25 fields; of those 25 fields, we selected only 4 as key fields. This datasource is used in an ODS and everything works fine, so it was moved into production.
    Now, due to the data volume in production, we want to load data with a selection on a date that is not a key of the datasource. We need an init on the BW side -- how can we get data for a certain period only? Please guide me.
    with hopes,
    Jaya

    Hi Jaya,
    I hope you have data in your source,
    and I hope you have a date field available as a selection in your InfoPackage.
    If you have a date field (fiscal year/period, calendar month or the like) in the selection tab of the InfoPackage, then you can use it for loading to your DSO. This way you can do multiple loads into your DSO, and you also ensure that there is less load on the source.
    Take these selective loads as repair full requests into your DSO. After that you can perform the init without data transfer,
    and then proceed with the deltas regularly later on.
    Thanks
    Assign points if this helps

  • OWB 10g - The time taken for data load is too high

    I am loading data on the test data warehouse server. The time taken for loading data is very high. The size of the data is around 7 GB (size of the flat files on the OS).
    The time it takes to load the same amount of data on the production server from the staging area to the presentation area (data warehouse) is close to 8 hours maximum.
    But in the test environment the time taken to execute one mapping (containing 300,000 records) is itself 8 hours.
    The version of the Oracle database on both the test and production servers is the same, i.e. Oracle 9i.
    The configuration of the production server is: 4 Pentium III processors (2.7 GHz each), 2 GB RAM, Windows 2000 Advanced Server, 8 KB primary memory cache, 512 KB secondary memory cache, 440.05 GB usable hard drive capacity, 73.06 GB hard drive free space.
    The configuration of the test server is: 4 Pentium III processors (2.4 GHz each), 1 GB RAM, Windows 2000 Advanced Server, 8 KB primary memory cache, 512 KB secondary memory cache, 144.96 GB usable hard drive capacity, 5.22 GB hard drive free space.
    Can you guys please help me to detect the possible causes of such erratic behaviour of the OWB 10g tool.
    Thanks & Best Regards,
    Harshad Borgaonkar
    PwC

    Hello Harshad,
    2 GB of RAM doesn't seem like very much to me. I guess your bottleneck is I/O. You've got to investigate this (keep an eye on long-running processes). You didn't say very much about your target database design. Do you have a lot of indexes on the target tables, and if so, have you tried dropping them before loading? Do your OWB mappings require a lot of lookups (if so, appropriate indexes on the lookup tables are very useful)? Do you use external tables? Are you talking about loading dimension tables, fact tables, or both? You've got to supply some more information so that we can help you better.
    Regards,
    Jörg
