Oracle Pre-Load Actions error

Hi all,
I am trying to install SAP R/3 4.7 IDES with Oracle 9 (patch 9.2.0.3.0) on Windows Server 2003 SP2. I encountered the following error during the Database Instance installation, in the Oracle Pre-load Actions step:
CJS-00084 SQL Statement or Script failed. Error Message: Executable C:\oracle\ora92/bin/sqlplus.exe returns 3.
ERROR 2007-06-13 10:25:46
FJS-00012 Error when executing script.
I have already tried the following:
1) The executable C:\oracle\ora92/bin/sqlplus.exe exists, and I am able to connect as sysdba.
2) I changed the SAPinst working directory name to one with no spaces ("C:\SAPinst_Oracle_KERNEL").
3) I installed the Oracle software without creating a database.
I think I am making a mistake in one of the following two steps:
1) I installed the MS Loopback adapter, but what exact entry do I need to put in the hosts file?
2) When I installed Oracle and ran the "sapserver.cmd" file, a Windows dialog popped up asking which program I wanted to use to open the "sapserver.rsp" file, and I chose Notepad.
After Oracle was installed, I ran some SQL*Plus commands and was able to connect as sysdba, but the listener status command showed one listener running and another with status unknown. After the installation stopped, the listener status command gave a TNS adapter error.
Please help.

>> I have already tried the following:
>> 1) The executable C:\oracle\ora92/bin/sqlplus.exe exists, and I am able to connect as sysdba.
>> 2) I changed the SAPinst working directory name to one with no spaces ("C:\SAPinst_Oracle_KERNEL").
>> 3) I installed the Oracle software without creating a database.
Never change the name of the SAPinst working directory. In this version it is designed to use a directory with spaces, and it will work with it.
What you should not do is copy additional CDs (such as Export or Kernel) into directories whose paths contain blanks.
>> 1) I installed the MS Loopback adapter, but what exact entry do I need to put in the hosts file?
>> 2) When I installed Oracle and ran the "sapserver.cmd" file, a Windows dialog popped up asking which program I wanted to use to open the "sapserver.rsp" file, and I chose Notepad.
That should not happen at all.
The response file is opened by Oracle's setup.exe, which is invoked by sapserver.cmd. Is the Oracle CD located in a directory whose path contains blanks?
Peter

Similar Messages

  • Problems with the Oracle Pre-Load actions in a homogeneous system copy

    Hi,
    I am trying to do a NW04 homogeneous system copy, and I am having trouble with the ABAP copy (Oracle Pre-Load Actions step).
    I am following the database-specific procedure, so the creation of the new system proceeds until sapinst asks me to provide an online/offline backup and the control file; sapinst then tries to recover the database by itself, but runs into trouble:
    Error: CJS-00084 SQL statement or script failed. DIAGNOSIS: Error message: ORA-01194: file 1 needs more recovery to be consistent. ORA-01110: data file 1: 'oracle/XIE...'
    If I do the recovery manually, I have to run the usual:
    RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
    CANCEL
    ALTER DATABASE OPEN RESETLOGS;
    If I follow sapinst.log, I find that sapinst tries to recover the database but fails with the above message and:
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
    Yes, that's true: when I do it manually I have to open the database with 'ALTER DATABASE OPEN RESETLOGS'.
    The problem could be fixed by modifying the run_control.sql script that SAPinst creates in the working directory, but no modification to that script is possible, because SAPinst deletes and recreates it immediately before executing it. So what can I do?
    Many thanks
    Mario

    SOLVED
    For your info: run_control.sql is unmodifiable, but I could modify the CONTROL.SQL that it calls. So I recovered the database manually, and since that CONTROL.SQL was simply a CONNECT followed by an OPEN DATABASE (see the sketch below), I could continue with my installation. Another point for us against the damned SAPinst!
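    For illustration only, the kind of minimal "connect and open" script described above would look roughly like this; the actual CONTROL.SQL generated by SAPinst is not shown in the post:
    -- Sketch of a minimal "connect and open" script as described above
    CONNECT / AS SYSDBA
    ALTER DATABASE OPEN;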

  • MOS-01012 during Oracle Pre-Load phase system copy

    Hello,
    The problem is in the 'Oracle Pre-Load Actions' phase, where we get the error:
    MOS-01012  PROBLEM: '/sapmnt/Q02/exe/saplicense -R3Setup Q02 "OMLRUURS" TRACE=2' returned with '254', which is not defined as a success code.
    Homogeneous system copy, AIX, ORA 9.2.0.8
    I've checked a few things:
    - when I manually execute this command I have:
    Fri Jan 28 17:04:02 2011
    MtxInit: 0 0 0
    - R3trans -d is ok (rc 0000)
    - Q02 database is up
    - oracle listener is up
    - saplicense -show
    LICENSE system: Q02 hardware key: N1153195602 expiration_date: 99991231
            installation no: 0110004123 key: xxxxx
            userlimit: 0 productid: R3_ORA
            system-nr: 000000000310644498
    license useable ***
    - saplicense -get (returns N1153195602)
    - saplicense.log contain:
    Fri Jan 28 17:04:02 2011
    MtxInit: 0 0 0
    I'm stuck, no more ideas...
    Best regards,
    Michal
    dev_slic contains:
    SlicIGetDate: <20110129>
    SlicPwForR3Setup: calc password out of <Q0220110129FJAWFNLTL>
    SlicIGetDate: <20110129>
    SlicSapInstall: sysname: >Q02< connect: >1< rollback: >1<
    SlicIDbLock: first call to SlicIDbLock: initialize Mutex
    db_con_init called
    create_con (con_name=R/3)
    Loading DB library '/sapmnt/Q02/exe/dboraslib.o' ...
    load shared library (/sapmnt/Q02/exe/dboraslib.o), hdl 0
    Library '/sapmnt/Q02/exe/dboraslib.o' loaded
    function DbSlExpFuns loaded from library /sapmnt/Q02/exe/dboraslib.o
    Version of '/sapmnt/Q02/exe/dboraslib.o' is "640.00", patchlevel (0.220)
    function dsql_db_init loaded from library /sapmnt/Q02/exe/dboraslib.o
    function dbdd_exp_funs loaded from library /sapmnt/Q02/exe/dboraslib.o
    New connection 0 created
    0: name = R/3, con_id = -000000001 state = DISCONNECTED, perm = YES, reco = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO
    db_con_connect (con_name=R/3)
    find_con_by_name found the following connection for reuse:
    0: name = R/3, con_id = 000000000 state = DISCONNECTED, perm = YES, reco = NO , timeout = 000, con_max = 255, con_opt = 255, occ = NO
    Got ORACLE_HOME=/oracle/Q02/920_64 from environment
    -->oci_initialize (con_hdl=0)
    got NLS_LANG='AMERICAN_AMERICA.UTF8' from environment
    ERROR => OCI-call 'OCIEnvCreate(mode=16384)' failed: rc = -1 [dboci.c      2287]
       set_ocica() -> OCI or SQL return code -1
    ERROR => OCI-call 'OCIErrorGet' failed: rc = -2 [dboci.c      2081]
    -->oci_initialize (con_hdl=0)
    got NLS_LANG='AMERICAN_AMERICA.UTF8' from environment
    Client NLS settings:
    Logon as OPS$-user to get SAPQ02's password
    Connecting as /@Q02 on connection 0 (nls_hdl 0) ... (dbsl 640 070308)
    Nls CharacterSet                 NationalCharSet              C      EnvHp      ErrHp ErrHpBatch
      0                                                           0      (nil)      (nil)      (nil)
    Allocating service context handle for con_hdl=0
    ERROR => OCI-call 'OCIHandleAlloc' failed: rc = -2 [dboci.c      2830]
    ERROR => CONNECT failed with sql error '-2' [dbsloci.c    11395]
       set_ocica() -> OCI or SQL return code -2
    ERROR => OCI-call 'OCIErrorGet' failed: rc = -2 [dboci.c      2081]
    Try to connect with default password
    Connecting as SAPQ02/<pwd>@Q02 on connection 0 (nls_hdl 0) ... (dbsl 640 070308)
    Nls CharacterSet                 NationalCharSet              C      EnvHp      ErrHp ErrHpBatch
      0                                                           0      (nil)      (nil)      (nil)
    Allocating service context handle for con_hdl=0
    ERROR => OCI-call 'OCIHandleAlloc' failed: rc = -2 [dboci.c      2830]
    ERROR => CONNECT failed with sql error '-2' [dbsloci.c    11395]
       set_ocica() -> OCI or SQL return code -2
    ERROR => OCI-call 'OCIErrorGet' failed: rc = -2 [dboci.c      2081]
    Edited by: Michal Sarna on Jan 29, 2011 4:37 PM

    > Got ORACLE_HOME=/oracle/Q02/920_64 from environment
    > -->oci_initialize (con_hdl=0)
    > got NLS_LANG='AMERICAN_AMERICA.UTF8' from environment
    > *** ERROR => OCI-call 'OCIEnvCreate(mode=16384)' failed: rc = -1 [dboci.c      2287]
    >    set_ocica() -> OCI or SQL return code -1
    > *** ERROR => OCI-call 'OCIErrorGet' failed: rc = -2 [dboci.c      2081]
    > -->oci_initialize (con_hdl=0)
    Did you install the Oracle 9.2.0.8 client? Or did you link the binaries from the ORACLE_HOME? Or are you using the Oracle client from the installation DVD?
    Markus

  • MS Access to Oracle 8i - loading data error

    I'm able to create all the access db objects in an Oracle 8i database.
    When I try to load the data I get this in the error.log file:
    EXCEPTION : LoadTableData.run() : [Microsoft][ODBC Driver Manager] Driver does not support this function
    I have MSAccess97 SR-2 installed. My ODBC Microsoft access driver is 4.00.420200
    Any ideas of what I'm doing wrong?
    Thanks!
    Patricia Pierson
    IBM Global Services DBA
    303.571.6380
    [email protected]
    null

    Hi,
    Please e-mail [email protected] with your issue.
    Regards
    John

  • Photoshop CS6 Not Loading: Actions & Preferences Errors

    Photoshop CS6 has been working like a dream, and recently it has pushed some very large files.
    But, upon trying to open Photoshop this morning, I get these two messages:
    Could not load actions because an unexpected end-of-file was encountered.
    Could not initialize Photoshop because the Preferences file was invalid (it has been deleted).
              - although the file was still in the AppData...Adobe Photoshop CS6 Settings.
    Tried to 'restore' Preferences:
    Keyboard - CTRL ALT SHIFT upon clicking the PSD icon
    Said 'Yes' to allow changes to system
    Still received same two messages
    Removed the Adobe Photoshop CS6 Prefs.psp
              Restarted Photoshop, still received same two messages.
              Tried resetting, CTRL ALT SHIFT, again still same two messages.
    Read Adobe thread to 'rename' the Actions Palette.psp, but keep it in the folder:
    I renamed it 'renamed' and left the Adobe Photoshop CS6 Prefs.psp in the folder as well.
    Did not receive the 'load actions' error, but
    Still received the 'Preference file was invalid' message.
    Tried CTRL ALT SHIFT to reset,
    Still received the 'Preference file was invalid' message.
    Left the Actions as 'renamed', but removed the Adobe Photoshop CS6 Prefs.psp file as well.
    Still received the 'Preference file was invalid' message.
    Tried CTRL ALT SHIFT to reset,
    Still received the 'Preference file was invalid' message.
    Do I need to do a re-install?
    Thanks bunches,

    I am having only the single issue:
    Could not initialize Photoshop because the preferences file was invalid (it has been deleted).
    What happened was that I had to do a force quit (Mac OS X 10.8.4) due to Spotlight and Parallels Win 7 causing a system slowdown. I don't believe my new HD is corrupted, but there has to be a way to get Photoshop working again. I've tried a Shift start and not loading third-party plugins, but nothing is working.
    UPDATE:  Download and reinstall fixed it after uninstalling and removing preferences.  However, you can't use the CC Applications Manager to do so. You have to do a direct download.
    Message was edited by: jefferis

  • SQL*Loader-925: Error while uldlfca: OCIStmtExecute (ptc_hp)

    Hi, my table load is failing with the following error, but the same table loads daily without any problem. I tried again and it failed with the same message.
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    SQL*Loader-925: Error while uldlfca: OCIStmtExecute (ptc_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (ptc_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (ptc_hp)
    ORA-24338: statement handle not executed
    How to solve this problem?
    Thanks
    Prashanth

    user614414 wrote:
    Hi BluShadow,
    Nothing has changed recently. Loading was happening for other files at the same time this load failed. I renamed and recreated the table, and then it worked fine. A similar problem occurred today for some other table.
    May I know for what other reason this can happen?
    You've only supplied us with an error message, and that's what the error message indicates.
    If you want more help, you need to supply more details.

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application (Oracle 12c), we are indexing a big XML field (stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, I cannot get decent performance (and especially not predictable performance; it looks like if the blocks of the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c, I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. I am not sure whether it is a bug or not.
    What I found as a workaround is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it gets loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the $S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT as well, but putting it in the keep cache is not as important as the token_info column of the DR$I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when running ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
    XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to the one below:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem now we must read the LOB so that it gets loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);   -- buffer must be RAW when reading from a BLOB
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               -- read a few bytes of every 4K chunk so the whole LOB is pulled into the buffer cache
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on that issue recently, and I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for it.
    What kind of performance do you get with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And indeed, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is executed continuously.
    I think the algorithm Oracle uses to keep blocks in the cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the In-Memory option, with which you can pin tables or columns in memory, with compression, etc. This looks ideal for the text index; I hope R. Ford will finally update his white paper :-)
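    For illustration only (the post shows no actual command), the per-table In-Memory syntax referred to above would look roughly like this for the example $I table, assuming 12.1.0.2 with an INMEMORY_SIZE configured:
    -- Sketch only: pin the example $I table in the In-Memory column store
    alter table DR$I_TEST$I inmemory memcompress for query high priority high;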
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy, but this was closed as not a bug, and I can't do anything about it. It is a bug in my opinion, because the CREATE INDEX command and the ALTER INDEX REBUILD command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    For that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick no longer works.
    What worked:
    First, set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard 16K block size, as sketched below.
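    As an illustration only (the post does not show the actual preference change), redirecting the $I table to a 16K-block tablespace could reuse the TEST_STO preference from the example above; TBS16K here is a hypothetical tablespace created with a 16K block size:
    rem Sketch: point the $I table and its index at a hypothetical 16K-block tablespace
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace TBS16K storage (initial 1M)');
    exec ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace TBS16K storage (initial 1M) compress 2');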
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means it bypasses the cache and reads directly from the files into the PGA. There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; setting event 10949 avoids the direct path read issue:
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is almost exactly what is described in MetaLink note 1645634.1, but for a non-partitioned index. The workaround given there seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a workaround, but I did not find it on MetaLink. (A sketch of the call pattern follows below.)
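    For reference only, a sketch of the dbms_pclxutil.build_part_index call pattern referred to above; the table and index names are illustrative (a local partitioned index is normally created UNUSABLE first), and the parallelism values are just examples:
    begin
      -- Sketch only: rebuild the partitions of a local index with
      -- intra-partition parallelism (names and values are illustrative)
      dbms_pclxutil.build_part_index(jobs_per_batch => 2,
                                     procs_per_job  => 4,
                                     tab_name       => 'MY_PART_TAB',
                                     idx_name       => 'I_MY_PART_TAB',
                                     force_opt      => FALSE);
    end;
    /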
    Other points of attention with text index creation (things that surprised me at first):
    - If you use the dbms_pclxutil package, then ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_job.
    - This, in combination with the fact that on a RAC you may not see any activity on the box you are on, can be very frightening: Oracle can choose to start the workers on the other node.
    I now understand much better how text indexing works, and I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP, while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the developers to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...)
    Regards, Pierre

  • I loaded actions into my PSE 10, then when I restarted my PC (Windows 7) and PSE 10 I got a Runtime Error (Microsoft Visual C++ Runtime Library)

    I loaded actions into my PSE 10, then when I restarted my PC (Windows 7) and PSE 10 I got a Runtime Error (Microsoft Visual C++ Runtime Library).

    Are these actions you put into the C:\ProgramData\Adobe\Photoshop Elements\10.0\Photo Creations\photo effects folder?
    If so, did you delete both the
    ThumbDatabase.db3  in C:\ProgramData\Adobe\Photoshop Elements\10.0
    and
    MediaDatabase.db3 in C:\ProgramData\Adobe\Photoshop Elements\10.0\Locale\en_us
    after installing the actions?
    Then let the pse 10 editor run for about 10 to 20 minutes to rebuild the effects database.

  • I am getting a Photoshop error: could not load Photoshop because the preferences file was invalid, and couldn't load actions because an unexpected end of file was encountered

    I am getting a Photoshop error: could not load Photoshop because the preferences file was invalid, and couldn't load actions because an unexpected end of file was encountered.

    Are you trying to open Photoshop directly, or open it by double-clicking on a Photoshop document, like a PSD file? If the former, then you might need to reinstall. Resetting Preferences would be the first step, but I am not sure if that will work if Photoshop is not opening in the first place.
    Try doing so anyway.  Hold down Shift Alt Ctrl (Shift Opt Cmd) while opening Photoshop. OK the message to delete Preferences.
    If you get the unexpected end of file error while trying to open Photoshop via a PSD document, then there is more than a good chance that that file is toast.
    If still stuck, we need to know Operating System, and Photoshop version.

  • Pre-loaded windows 8.1 error

    To start off with, may I disclose that I am a complete novice with computers and do not understand any of the technical names used in these forums. I have a G500, bought new, with Windows 8.1 pre-loaded. I have used it without problem for 14 months, but have recently been receiving messages to activate Windows. I contacted Microsoft support, and they tried to remotely activate Windows for me, but were unable to. They told me that an error in the licence that was pre-injected in the motherboard is the cause. Has anybody any idea how to correct this? Microsoft told me that the error was already there before I bought the computer. Any help would be much appreciated.

    Hello, thanks for answering. I purchased the computer from a large, reputable company in the UK called Littlewoods in April 2014. I cannot discover the status of my warranty, as when I put my computer's serial number in the warranty checker, it informs me that it does not recognise it. I believe, though, that it came with a 12-month warranty. The lady at Microsoft support informed me that the best way to sort this out would be to purchase a new Windows 8.1. But, as she told me that the error was already in the computer before I purchased it, this seems a little unfair, as the fault is not due to anything I have done.

  • Error Could not pre-load servlet: MessageBrokerServlet

    I get the following errors during LCDS startup:
    error Could not pre-load servlet: MessageBrokerServlet
    [1]java.lang.UnsupportedClassVersionError: flexdev/FundciteAssembler (Unsupported major.minor version 50.0)
    [Flex] Error instantiating application scoped instance of type 'flexdev.FundciteAssembler' for destination 'fundcite'.
    java.lang.UnsupportedClassVersionError: flexdev/FundciteAssembler (Unsupported major.minor version 50.0)
    I have a MySQL server running and I am trying to connect to it through Java. Any idea what this error means?

    "Unsupported major.minor version" means you are using a JVM version that is not supported.
    List of supported JVMs can be found here:
    http://www.adobe.com/products/livecycle/systemreqs.html#item-02

  • Error in Load Action Script

    Hi All,
    I have an amended Clear Calc script (as per ID 1101084.1) in the Load action script:
    Dim strTYear
    strTYear = API.POVMgr.fPeriodKey(strPer(0)).strTargetYear
    and also:
    strTPer(0)
    However, both of these are now returning [empty]. The Load and Validate steps work without issue.
    Any pointers welcome.
    Thanks
    Mark

    Hi Tony,
    Thanks for that it worked.
    I had only partially completed the Periods Control Table, so when I extracted data it worked; however, when the code creates the scripts it must parse through all the Periods (or something similar) and can't handle null entries. Interesting!
    Mark

  • Data Loading using SQL* Loader giving errors..

    While loading the data using SQL*Loader, I came across the following errors:
    - SQL*Loader-00604 Error occurred on an attempt to commit
    - ORA-01041 internal error. hostdef extension doesn't exist
    My control and data files have proper carriage returns, i.e. the last line of both files is blank.
    So, if somebody knows about this, please help me.
    Thanks

    ORA-00604 error occurred at recursive SQL level string
    Cause: An error occurred while processing a recursive SQL statement (a statement applying to internal dictionary tables).
    Action: If the situation described in the next error on the stack can be corrected, do so; otherwise contact Oracle Support Services
    This kind of error occurs when the data dictionary is queried a lot.
    Joel Pérez

  • Issue with the report oracle.apps.xdo.XDOException:Error creating lock file

    I am getting an error while submitting the XML Publisher report in the Betsy N0 instance.
    Issue
    Log :- Beginning post-processing of request 145458120 on node BETSYN0DB1 at 22-DEC-2011 05:28:26.
    Post-processing of request 145458120 failed at 22-DEC-2011 05:28:28 with the error message: One or more post-processing actions failed. Consult the OPP service log for details
    Output :- Blank Report
    XML file :- XML file generated without any error
    Output process:-
    [12/23/11 4:28:10 AM] [1213640:RT145480622] Completed post-processing actions for request 145480622.
    [12/23/11 5:16:01 AM] [OPPServiceThread1] Post-processing request 145481427.
    [12/23/11 5:49:30 AM] [OPPServiceThread1] Post-processing request 145481923.
    [12/23/11 5:49:30 AM] [1213640:RT145481923] Executing post-processing actions for request 145481923.
    [12/23/11 5:49:30 AM] [1213640:RT145481923] Starting XML Publisher post-processing action.
    [12/23/11 5:49:30 AM] [1213640:RT145481923]
    Template code: XXWIP_WO_VLVS
    Template app: XXWIP
    Language: en
    Territory: US
    Output type: PDF
    oracle.apps.xdo.XDOException: Error creating lock file
         at oracle.apps.xdo.oa.util.FontCacheManager.getFontFromDB(FontCacheManager.java:320)
         at oracle.apps.xdo.oa.util.FontCacheManager.getFontFilePath(FontCacheManager.java:198)
         at oracle.apps.xdo.oa.util.FontHelper.createFontProperties(FontHelper.java:432)
         at oracle.apps.xdo.oa.util.ConfigHelper.getFontProperties(ConfigHelper.java:166)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5824)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3459)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3548)
         at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(XMLPublisherProcessor.java:247)
         at oracle.apps.fnd.cp.opp.OPPRequestThread.run(OPPRequestThread.java:157)
    [12/23/11 5:49:31 AM] [UNEXPECTED] [1213640:RT145481923] oracle.apps.xdo.XDOException: Error creating lock file
         at oracle.apps.xdo.oa.util.FontCacheManager.getFontFromDB(FontCacheManager.java:320)
         at oracle.apps.xdo.oa.util.FontCacheManager.getFontFilePath(FontCacheManager.java:198)
         at oracle.apps.xdo.oa.util.FontHelper.createFontProperties(FontHelper.java:432)
         at oracle.apps.xdo.oa.util.ConfigHelper.getFontProperties(ConfigHelper.java:166)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5824)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3459)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3548)
         at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(XMLPublisherProcessor.java:247)
         at oracle.apps.fnd.cp.opp.OPPRequestThread.run(OPPRequestThread.java:157)
    [12/23/11 5:49:31 AM] [1213640:RT145481923] Completed post-processing actions for request 145481923.

    The issue is fixed.
    Our XML Publisher TMP directory is not defined, so it uses the APPLTMP directory used by the concurrent manager. There is a folder xdofonts/<SID> in $APPLTMP. It contains the fonts used by XML Publisher, and one of the font files (.ttf) was 0 bytes. We copied the .ttf file from QA, and it started working.
    Best practice is to define different TMP directories for the concurrent managers and XML Publisher.
    Cherrish Vaidiyan

  • Epson printers keep giving "media not loaded correctly" error message

    When sending print jobs that consist of multiple copies to Epson printers from my dual 3 GHz Quad-Core Intel Xeon Mac with OS 10.5.8, the printer will frequently (more than half the time) grab a sheet of paper and spit it out with nothing on it, and then return the error message "Media out or not loaded correctly". I've had the exact same issue with their 1280, 2400 and 2880 printers. I've been struggling with and troubleshooting this issue for a couple of months now, and I believe I've narrowed it down to the operating system. It happens when printing from both Photoshop CS3 and CS4. It doesn't happen when I print from another computer running OS 10.4.11.

    Dave Lesko wrote:
    ...the printer will frequently (more than half the time) grab a sheet of paper and spit it out with nothing on it, and then return the error message "Media out or not loaded correctly". I've had the exact same issue with their 1280, 2400 and 2880 printers...It doesn't happen when I print from another computer running OS 10.4.11.
    Interesting observation of the OSs. Have you tried Epson's suggestions for the 1280 printer and 2880 printer?
    BTW: This was a constant issue with my Epson 3000. It happened mostly with paper larger than tabloid size, which is the reason for having a large-format printer. There was no difference between versions of the OS. This was with the paper tray, so it may not be relevant to your printers. However, when I first got the printer it came with some "cleaning sheets" to run through the printer when changing between different types of paper to clean the rollers. It seemed to help... What worked best was to pre-load the first sheet of a multi-page document.
    HTH
