Need help in understanding a certain error message

Hello everybody,
I get the following error message:
"Kein Speicher der Länge 720 für OCCURS-Bereich verfügbar"
(I assume in English that would be: "No memory of length 720 available for the OCCURS area")
Can anyone give me a hint as to what this means and what I can do to avoid the error?
Regards

Is this happening when you open Crystal Reports Designer or a report?
Check your anti-virus software or possibly your firewall; CR tries to reach our home page to update the Start Page of the designer with links to updates, samples, etc.
Don

Similar Messages

  • Need help in understanding the error ORA-01843: not a valid month - ECX_ACT

    Hello All,
    We need help in understanding the Transaction Monitor -> Processing Message error ("ORA-01843: not a valid month - ECX_ACTIONS.GET_CONVERTED_DATE").
    We also need to know how to enable the log for Transaction Monitor -> Processing Logfile.
    We are trying to import the Purchase Order XML (OAG) into eBusiness Suite via BPEL Process Manager using the Oracle Applications Adapter. The process works fine with the expected payload until it reaches the XML Gateway Transaction Monitor, where we get this error.
    thanks
    muthu.

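    For context on the error itself (this illustration is generic and not tied to the ECX_ACTIONS internals): ORA-01843 is raised when the month portion of a date string does not match the format mask or NLS date-language settings in effect, which in an XML Gateway scenario usually points at the date format of the inbound payload. A minimal SQL illustration:
    -- month component 13 cannot be parsed -> ORA-01843: not a valid month
    SELECT TO_DATE ('2013-13-06', 'yyyy-mm-dd') FROM DUAL;
    -- the same string parses once the mask matches its actual layout
    SELECT TO_DATE ('2013-06-13', 'yyyy-mm-dd') FROM DUAL;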

  • I need help in understanding the customization of Landscape in R/3.

    I need help in understanding the customization of the landscape in R/3, i.e. the setup of the SAP landscape from an SAP SD point of view. As an SAP SD consultant, what would be my role in customizing the landscape servers? Help needed. Thx

    Hi,
    In a standard SAP project implementation, the standard three-system transport route is:
    Development System (DEV) --> QA System (QAS) --> Production System (PRD)
    In this structure, the Training Client (TRN) can be created as a copy of PRD (once real master data is available) or of the QA system (where the configuration tested in the DEV client is available and master data is uploaded manually for training purposes).
    Sandbox (standalone): This can be refreshed from the Golden Client to reflect the latest configuration and facilitate development/testing.
    -Development (DEV): Where all system configurations and development activities are carried out.
    -Quality Assurance (QAS): Where functional testing is carried out. System Integration Testing (carried out by the Development Team) and User Acceptance Testing (carried out by XXX appointed personnel) are performed on this server.
    -Training (TRN): End users are trained on this server.
    -Production (PRD): After the system is commissioned, all data entry and administrative functions are carried out on this server.
    This is by far the standard landscape architecture that is adopted and practiced in most implementations.
    Hope the above helps.
    Thanks.

  • Need help for understanding the behaviour of these 2 queries....

    Hi,
    I need your help in understanding the behaviour of the following two queries.
    The requirement is to repeat the values of a column in a table a random number of times.
    E.g. a table xyz is created like this:
    create table xyz as
    select 'A' || rownum my_col
    from all_objects
    where rownum < 6;
    my_col
    A1
    A2
    A3
    A4
    A5
    I want to repeat each of these values (A1, A2, ... A5) multiple times, with the number of repetitions decided randomly. I have written the following query:
    with x as (select my_col, trunc(dbms_random.value(1,6)) repeat from xyz),
    y as (select level lvl from dual connect by level < 6)
    select my_col, lvl
    from x, y
    where lvl <= repeat
    order by my_col, lvl
    It gives output like
    my_col lvl
    A1     1
    A1     3
    A1     5
    A2     1
    A2     3
    A2     5
    A3     1
    A3     3
    A3     5
    A4     1
    A4     3
    A4     5
    A5     1
    A5     3
    A5     5
    Here in the output, I am not getting rows like
    A1     2
    A1     4
    A2     2
    A2     4
    Also, it has generated the same set of records for all the values (A1, A2,...,A5).
    Now, if I store the randomly-decided value in the table like ---
    create table xyz as
    select 'A' || rownum my_col, trunc(dbms_random.value(1,6)) repeat
    from all_objects
    where rownum < 6;
    my_col repeat
    A1     4
    A2     1
    A3     5
    A4     2
    A5     2
    And then run the query,
    with x as (select my_col, repeat from xyz),
    y as (select level lvl from dual connect by level < 6)
    select my_col, lvl
    from x, y
    where lvl <= repeat
    order by my_col, lvl
    I get exactly the output I want:
    my_col lvl
    A1     1
    A1     2
    A1     3
    A1     4
    A2     1
    A3     1
    A3     2
    A3     3
    A3     4
    A3     5
    A4     1
    A4     2
    A5     1
    A5     2
    Why does the first approach not generate such output?
    How can I get this result without storing the repeat values?

    If I've understood your requirement, the below will achieve it:
    SQL> create table test(test varchar2(10));
    Table created.
    SQL> insert into test values('&test');
    Enter value for test: bob
    old   1: insert into test values('&test')
    new   1: insert into test values('bob')
    1 row created.
    SQL> insert into test values('&test');
    Enter value for test: terry
    old   1: insert into test values('&test')
    new   1: insert into test values('terry')
    1 row created.
    SQL> insert into test values('&test');
    Enter value for test: steve
    old   1: insert into test values('&test')
    new   1: insert into test values('steve')
    1 row created.
    SQL> insert into test values('&test');
    Enter value for test: roger
    old   1: insert into test values('&test')
    new   1: insert into test values('roger')
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select lpad(test,(ceil(dbms_random.value*10))*length(test),test) from test;
    LPAD(TEST,(CEIL(DBMS_RANDOM.VALUE*10))*LENGTH(TEST),TEST)
    bobbobbobbobbobbobbobbobbobbob
    terryterry
    stevestevesteve
    rogerrogerrogerrogerrogerrogerrogerrogerroger
    You can alter the value of 10 in the SQL if you want the potential for a higher number of repetitions.
    Andy
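
    For the original requirement of getting each value on repeated rows (rather than as a padded string), one commonly used pattern restricts the CONNECT BY to its own source row with PRIOR conditions (the same PRIOR/SYS_GUID() trick shown in the E-Rows thread further down). This is only a sketch against the xyz table above; because DBMS_RANDOM is non-deterministic, a MATERIALIZE hint is included to encourage the repeat count to be computed once per row rather than re-evaluated:
    with x as (select /*+ materialize */
                      my_col,
                      trunc(dbms_random.value(1,6)) repeat  -- random repeat count 1..5 per row
                 from xyz)
    select my_col, level lvl
      from x
    connect by level <= repeat
           and prior my_col = my_col           -- stay on the same source row
           and prior sys_guid() is not null    -- keep each parent unique so no cycle is flagged
     order by my_col, lvl;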

  • Need help in understanding FA acquisition via Internal Order

    Hi Gurus
    I need your help in understanding the FA acquisition via Internal Order. The process we are following is that we create AUC using the AUC asset class and enter that AUC number in the settlement rule while creating IO. Once IO budget is approved, we release the IO. Once GR is completed, we do the Invoice receipt for the PO, followed by settlement of IO against the AUC. Afterwards, we create fixed asset in AS01 and enter this asset number in the settlement rule for the AUC in AIAB and settle the AUC to Fixed asset for the costs.
    My question is that during all this process, I don't see the PO information in the AUC record. When I display the AUC, under the Environment tab I see the Purchase order link, but when I click it, there is nothing there. The reason could be that we are creating the AUC separately and not from within the IO where it says "create AUC". I am not sure what the best way is for the whole process.
    I would be thankful if you can guide me.
    Thanks,
    Shalu

    Hi
    Can someone please help me with this issue?
    Thanks,
    Shalu

  • Need help to understand INSTALL command based on GPspec 2.1.1

    Hi Friends..
    Sorry in advance; I couldn't understand some commands based on GPSpec 2.1.1, as I'm pretty new to this field..
    So I need your help to understand the commands..
    Firstly, I want to know what exactly the DATA referred to by the INSTALL command is..
    Here is what is outlined in GPSpec 2.1.1 for the INSTALL command (Chapter 9)..
    CLA = '80' or '84'
    INS  = 'E6'              //INSTALL
    P1   = 'xx'               //Reference control parameter P1
    P2   = '00'               //Reference control parameter P2
    Lc   = 'xx'                //Length of data field
    Data 'xxxx…'           //Install data (and MAC if present)
    Le   = '00'
    What exactly does this data refer to?
    I thought the data referred to could be applets, packages, or CAP files..
    So how do I determine the Lc and the sequence of bytes for the applets/packages/CAP files?
    I found this LINK..
    On that website I read an example of an INSTALL command..
    Here's an example mentioned in the link above..
    INSTALL FOR LOAD
      84 E6 02 00 2B 10 A0 00 00 00 18 50 00 00 00 00 00 00 52 41 44 50 00 00 0E EF 0C C6 02 00 00 C8 02 00 00 C7 02 00 00 00 2A 8B 3A 01 3C 8E FD A4 (00)
      00, 90 00  [Normal ending of the command.]
    Sorry, I'm still not able to understand it..
    Please help me regarding this..
    Sorry, perhaps this question sounds silly..
    Thanks in advance..

    Hi,
    As a hint, you want to use INSTALL for install and make selectable. You can have a look at the APDU that GPShell sends through for the final install command (hint: it will be a part of the load command). You can also execute install_for_load, load and install_for_install in GPShell to see these commands. The GP card spec is a little confusing for the INSTALL command so tracing GPShell may help you understand it.
    Cheers,
    Shane

  • Need help on understanding COLUMN_IID, COLUMN_ID and ROW_IID

    Dear All,
    I need some help in understanding the below three things.
    COLUMN_IID, COLUMN_ID and ROW_IID
    First let me write down the requirement :
    I need to keep track of the scores on various status change.
    In the design of the template, we have something called 'Company Objectives' and 'Team Objectives' and 'Individual Objectives'.
    And under every heading, there are some objectives and a score beside it.
    When the document is with the employee, he/she decides the score (0 = Not started, 5 = Completed). When the employee submits the document, it goes to the manager, who may change the score against an objective.
    Now, the requirement is that with every change of status and substatus, I need to take note of the score. Is this value stored in any standard table? I checked the table HRHAP_FURTHER but I cannot find it there.
    When I check the Function Modules 'HRHAP_DOCUMENT_GET_DETAIL' and 'HRHAP_DOC_FURTHER_READ', I see those values, but against various ROW_IID, COLUMN_IID and COLUMN_ID values. I need to know how to catch the ROW_IID, COLUMN_IID and COLUMN_ID for a particular objective, and what the concept behind ROW_IID, COLUMN_IID and COLUMN_ID is.
    Please let me know if something is not clear. I will try to give some more explanation.

    Hi,
    For the context, in case it was not clear in the original message, we are talking Performance Management.
    As you know, the documents are based on appraisal templates. On document creation this template is read and the different elements are generated. As we can have the same element type/id multiple times in a template/document, we need something to uniquely identify them. This is done via the ROW_IID.
    Then, for each element, we can define which columns we use. A column is identified by COLUMN_ID, which is unique at template configuration level. But at document level this is not the case: because the Part Appraiser columns (PAPP/PFGT) are multiplied by the number of part appraisers in the document, the COLUMN_ID is not unique anymore. So we need to give them a unique ID as well, which is the COLUMN_IID.
    That's the short answer; I will write a longer document on it in my blog one of these days.
    Regards and Groetjes,
    Maurice Hagen

  • Need help to understand about cluster

    Hi,
    I am a SQL developer and need a little help understanding what clusters are. I have never come across creating a cluster. Can someone please help me with a simple example: what is a cluster, and in which situations do we create one?
    Thanks
    Shantanu

    >
    I am a SQL developer and need a little help understanding what clusters are. I have never come across creating a cluster. Can someone please help me with a simple example: what is a cluster, and in which situations do we create one?
    >
    A cluster is used to store data from more than one table together. The data would typically share the primary key. For example, the scott DEPT and EMP table data is related via the DEPTNO value, so DEPTNO could be the cluster key.
    If you created a cluster and then created copies of those tables in the cluster (e.g. myDEPT and myEMP) Oracle would store the dept and emp data for the same DEPTNO value together, even in the same block.
    Then when you query dept and emp data using DEPTNO it will be faster to retrieve the DEPT and EMP data for that department since it is colocated in the same blocks.
    The drawbacks are that when you only want data from one of the tables (e.g. emp) Oracle has to skip over the DEPT data since some of it will be in the same blocks that the EMP data is in.
    So clusters and clustered tables are most useful when you always query multiple cluster tables using the cluster key. For some other operations they can be very inefficient.
    See the Admin Guide for an example using the DEPT and EMP tables
    http://docs.oracle.com/cd/E11882_01/server.112/e25494/clustrs003.htm#ADMIN11747
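
    As a minimal sketch of the DEPT/EMP example described above (the cluster name, the reduced column lists, and the use of the SCOTT demo schema are illustrative assumptions, not taken from the Admin Guide):
    -- cluster keyed on the department number shared by both tables
    create cluster dept_emp_cluster (deptno number(2));
    -- a cluster index is required before rows can be inserted
    create index dept_emp_cluster_idx on cluster dept_emp_cluster;
    -- copies of the SCOTT tables, stored together in the cluster by DEPTNO
    create table mydept (deptno number(2), dname varchar2(14), loc varchar2(13))
           cluster dept_emp_cluster (deptno);
    create table myemp (empno number(4), ename varchar2(10), deptno number(2))
           cluster dept_emp_cluster (deptno);
    insert into mydept select deptno, dname, loc from scott.dept;
    insert into myemp  select empno, ename, deptno from scott.emp;
    Rows from both tables that share a DEPTNO then end up in the same blocks, which is what makes joins on the cluster key cheap and single-table scans comparatively expensive.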

  • Error Posting IDOC: need help in understanding the following error

    Hi ALL
    Can you please help me understand the following error, encountered while the message was trying to post an IDoc?
    Here, SAP_050 is the RFC destination created to post IDocs.
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Call Adapter
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>XIAdapter</SAP:Category>
      <SAP:Code area="IDOC_ADAPTER">ATTRIBUTE_IDOC_RUNTIME</SAP:Code>
      <SAP:P1>FM NLS_GET_LANGU_CP_TAB: Could not determine code page with SAP_050 Operation successfully executed FM NLS_GET_LANGU_CP_TAB</SAP:P1>
      <SAP:P2 />
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>Error: FM NLS_GET_LANGU_CP_TAB: Could not determine code page with SAP_050 Operation successfully executed FM NLS_GET_LANGU_CP_TAB</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    Your help is greatly appreciated. Thank you!

    Hi Patrick,
      Check the authorizations assigned to the user you used in the RFC destination; if there are not enough authorizations, it is not possible to post the IDocs.
    Also refer to Note 747322.
    Regards,
    Prakash

  • E-Rows = NULL and A-Rows=42M? Need help in understanding why.

    Hi,
    Oracle Standard Edition 11.2.0.3.0 (CPU Oct 2012) running on Windows 2008 R2 x64. I am using the Oracle 10g syntax for the WITH clause, as the query will also run on Oracle 10gR2. I do not have an Oracle 10gR2 environment at hand to say whether it behaves the same there.
    The following query is beyond me. It takes around 2 minutes to return the "computed" result set of 66 rows.
    SQL> WITH dat AS
      2          (SELECT 723677 vid,
      3                  243668 fid,
      4                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
      5                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt
      6             FROM DUAL
      7           UNION ALL
      8           SELECT 721850,
      9                  243668,
    10                  TO_DATE ('06.02.2013', 'dd.mm.yyyy'),
    11                  TO_DATE (' 22.03.2013', 'dd.mm.yyyy')
    12             FROM DUAL
    13           UNION ALL
    14           SELECT 723738,
    15                  243668,
    16                  TO_DATE ('16.03.2013', 'dd.mm.yyyy'),
    17                  TO_DATE ('  04.04.2013', 'dd.mm.yyyy')
    18             FROM DUAL)
    19      SELECT /*+ GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
    20        FROM dat
    21  CONNECT BY LEVEL <= maxdt - mindt + 1
    22  order by fid, vid, dtshow;
    66 rows selected.
    SQL>
    SQL> SELECT * FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  9c4vma4mds6zk, child number 0
    WITH dat AS         (SELECT 723677 vid,                 243668 fid,
                TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
    TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt            FROM DUAL
    UNION ALL          SELECT 721850,                 243668,
       TO_DATE ('06.02.2013', 'dd.mm.yyyy'),                 TO_DATE ('
    22.03.2013', 'dd.mm.yyyy')            FROM DUAL          UNION ALL
        SELECT 723738,                 243668,                 TO_DATE
    ('16.03.2013', 'dd.mm.yyyy'),                 TO_DATE ('  04.04.2013',
    'dd.mm.yyyy')            FROM DUAL)     SELECT /*+
    GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
        FROM dat CONNECT BY LEVEL <= maxdt - mindt + 1 order by fid, vid,
    dtshow
    Plan hash value: 1865145249
    | Id  | Operation                              | Name | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT                       |      |      1 |        |     66 |00:01:54.64 |       |       |          |
    |   1 |  SORT UNIQUE                           |      |      1 |      3 |     66 |00:01:54.64 |  6144 |  6144 | 6144  (0)|
    |   2 |   CONNECT BY WITHOUT FILTERING (UNIQUE)|      |      1 |        |     42M|00:01:04.00 |       |       |          |
    |   3 |    VIEW                                |      |      1 |      3 |      3 |00:00:00.01 |       |       |          |
    |   4 |     UNION-ALL                          |      |      1 |        |      3 |00:00:00.01 |       |       |          |
    |   5 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    |   6 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    |   7 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    --------------------------------------------------------------------------------------------------------------------------
    If I take out one of the UNION queries, the query returns in under 1 second.
    SQL> WITH dat AS
      2          (SELECT 723677 vid,
      3                  243668 fid,
      4                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
      5                  TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt
      6             FROM DUAL
      7           UNION ALL
      8           SELECT 721850,
      9                  243668,
    10                  TO_DATE ('06.02.2013', 'dd.mm.yyyy'),
    11                  TO_DATE (' 22.03.2013', 'dd.mm.yyyy')
    12             FROM DUAL)
    13      SELECT /*+ GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
    14        FROM dat
    15  CONNECT BY LEVEL <= maxdt - mindt + 1
    16  order by fid, vid, dtshow;
    46 rows selected.
    SQL>
    SQL> SELECT * FROM TABLE (DBMS_XPLAN.display_cursor (NULL, NULL, 'ALLSTATS LAST'));
    PLAN_TABLE_OUTPUT
    SQL_ID  1d2f62uy0521p, child number 0
    WITH dat AS         (SELECT 723677 vid,                 243668 fid,
                TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
    TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt            FROM DUAL
    UNION ALL          SELECT 721850,                 243668,
       TO_DATE ('06.02.2013', 'dd.mm.yyyy'),                 TO_DATE ('
    22.03.2013', 'dd.mm.yyyy')            FROM DUAL)     SELECT /*+
    GATHER_PLAN_STATISTICS */ DISTINCT vid, fid, mindt - 1 + LEVEL dtshow
        FROM dat CONNECT BY LEVEL <= maxdt - mindt + 1 order by fid, vid,
    dtshow
    Plan hash value: 2232696677
    | Id  | Operation                              | Name | Starts | E-Rows | A-Rows |   A-Time   |  OMem |  1Mem | Used-Mem |
    |   0 | SELECT STATEMENT                       |      |      1 |        |     46 |00:00:00.01 |       |       |          |
    |   1 |  SORT UNIQUE                           |      |      1 |      2 |     46 |00:00:00.01 |  4096 |  4096 | 4096  (0)|
    |   2 |   CONNECT BY WITHOUT FILTERING (UNIQUE)|      |      1 |        |     90 |00:00:00.01 |       |       |          |
    |   3 |    VIEW                                |      |      1 |      2 |      2 |00:00:00.01 |       |       |          |
    |   4 |     UNION-ALL                          |      |      1 |        |      2 |00:00:00.01 |       |       |          |
    |   5 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    |   6 |      FAST DUAL                         |      |      1 |      1 |      1 |00:00:00.01 |       |       |          |
    26 rows selected.
    What I cannot understand is why the E-Rows is NULL for the "CONNECT BY WITHOUT FILTERING (UNIQUE)" step and why A-Rows shoots up to 42M in the first case. The behaviour is the same for any number of UNION branches above two.
    Can anyone please help me understand this and aid in tuning this accordingly? Also, I would be happy to know if there are better ways to generate the missing date range.
    Regards,
    Satish

    Maybe this? Without any PRIOR condition, the CONNECT BY treats every row of dat as a possible parent of every other row at each level, so the intermediate row count explodes. The PRIOR vid/fid conditions keep each row connected only to itself, and PRIOR SYS_GUID() IS NOT NULL stops that self-connection from being reported as a cycle.
    WITH dat AS
                (SELECT 723677 vid,
                        243668 fid,
                        TO_DATE ('06.03.2013', 'dd.mm.yyyy') mindt,
                        TO_DATE ('06.03.2013', 'dd.mm.yyyy') maxdt
                   FROM DUAL
                 UNION ALL
                 SELECT 721850,
                        243668,
                       TO_DATE ('06.02.2013', 'dd.mm.yyyy'),
                       TO_DATE (' 22.03.2013', 'dd.mm.yyyy')
                  FROM DUAL
                UNION ALL
                SELECT 723738,
                       243668,
                       TO_DATE ('16.03.2013', 'dd.mm.yyyy'),
                       TO_DATE ('  04.04.2013', 'dd.mm.yyyy')
                  FROM DUAL)
           SELECT  vid, fid, mindt - 1 + LEVEL dtshow
             FROM dat
      CONNECT BY LEVEL <= maxdt - mindt + 1
          and prior vid = vid
          and prior fid = fid
          and prior sys_guid() is not null
      order by fid, vid, dtshow;
    66 rows selected.
    Elapsed: 00:00:00.03

  • Need help in understanding why so many gets and I/O

    Hi there,
    I have a sql file somewhat similar in structure to below:
    delete from emp;   -- changed to: truncate table emp;
    delete from dept;  -- changed to: truncate table dept;
    insert into emp select a, b, c from temp_emp, temp_dept where temp_emp.id = temp_dept.emp_id;
    update emp set emp_name = (select emp_name from dept where emp.id = dept.emp_id);
    commit;            -- only at the end
    the above file takes about 9-10 hrs to complete its operation. and
    the values from v$sql for the statement
    update emp set emp_name=(select emp_name from dept where emp.id=dept.emp_id);
    are as below:
    SHARABLE_MEM     PERSISTENT_MEM     RUNTIME_MEM     SORTS     LOADED_VERSIONS     OPEN_VERSIONS     USERS_OPENING     FETCHES     EXECUTIONS     PX_SERVERS_EXECUTIONS     END_OF_FETCH_COUNT     USERS_EXECUTING     LOADS     FIRST_LOAD_TIME     INVALIDATIONS     PARSE_CALLS     DISK_READS     DIRECT_WRITES     BUFFER_GETS     APPLICATION_WAIT_TIME     CONCURRENCY_WAIT_TIME     CLUSTER_WAIT_TIME     USER_IO_WAIT_TIME     PLSQL_EXEC_TIME     JAVA_EXEC_TIME     ROWS_PROCESSED     COMMAND_TYPE     OPTIMIZER_MODE     OPTIMIZER_COST     OPTIMIZER_ENV     OPTIMIZER_ENV_HASH_VALUE     PARSING_USER_ID     PARSING_SCHEMA_ID     PARSING_SCHEMA_NAME     KEPT_VERSIONS     ADDRESS     TYPE_CHK_HEAP     HASH_VALUE     OLD_HASH_VALUE     PLAN_HASH_VALUE     CHILD_NUMBER     SERVICE     SERVICE_HASH     MODULE     MODULE_HASH     ACTION     ACTION_HASH     SERIALIZABLE_ABORTS     OUTLINE_CATEGORY     CPU_TIME     ELAPSED_TIME     OUTLINE_SID     CHILD_ADDRESS     SQLTYPE     REMOTE     OBJECT_STATUS     LITERAL_HASH_VALUE     LAST_LOAD_TIME     IS_OBSOLETE     CHILD_LATCH     SQL_PROFILE     PROGRAM_ID     PROGRAM_LINE#     EXACT_MATCHING_SIGNATURE     FORCE_MATCHING_SIGNATURE     LAST_ACTIVE_TIME     BIND_DATA     TYPECHECK_MEM
    18965     8760     7880     0     1     0     0     0     2     0     2     0     2     2011-05-10/21:16:44     1     2     163270378     0     164295929     0     509739     0     3215857850     0     0     20142     6     ALL_ROWS     656     E289FB89A4E49800CE001000AEF9E3E2CFFA331056414155519421105555551545555558591555449665851D5511058555155511152552455580588055A1454A8E0950402000002000000000010000100050000002002080007D000000000002C06566001010000080830F400000E032330000000001404A8E09504646262040262320030020003020A000A5A000     4279923421     50     50     APPS     0     00000003CBE5EF50     00     1866523305     816672812     1937724149     0     SYS$USERS     0     01@</my.sql     -2038272289          -265190056     0          9468268067     10420092918          00000003E8593000     6     N     VALID     0     2011-05-11/10:23:45     N     5          0     0     1.57848E+19     1.57848E+19     5/12/2011 4:39          0
    1) How do I rewrite this legacy script, and what should be done to improve performance?
    2) Should I use PL/SQL to rewrite it?
    3) Also, help me understand why a simple update statement is doing so many buffer gets and reads. Is this the read-consistency trap, since I'm not committing anywhere in between, or is it actually doing that much work?
    (assume dept table has cols emp_name and emp_id also)

    update emp set emp_name=(select emp_name from dept where emp.id=dept.emp_id);
    I guess that these are masked table names? Nobody would have emp_name in a dept table.
    Can you re-format the output using "code" tags?
    Hemant K Chitale
    Edited by: Hemant K Chitale on May 12, 2011 12:44 PM
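
    For the rewrite question, one option worth testing (shown here only as a sketch, and assuming dept really does carry emp_id and emp_name as stated in the question) is to replace the correlated UPDATE with a single MERGE so each emp row is visited once:
    -- hypothetical rewrite of the correlated UPDATE as a single MERGE
    merge into emp e
    using (select emp_id, emp_name from dept) d
       on (e.id = d.emp_id)
    when matched then
         update set e.emp_name = d.emp_name;
    Note that, unlike the original UPDATE, this leaves emp rows without a matching dept row untouched instead of setting their emp_name to NULL, so check which behaviour is actually wanted.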

  • Duplicate partitions, need help to understand and fix.

    So I was looking for a USB drive that I had plugged in, went into /media/usbhd-sdb4/miles, and realized all its contents were from my home directory.
    So I created a random file in my home directory to see if it would also appear in /media/usbhd-sdb4/miles, and it did.
    Can someone help me understand what is happening?
    Also, can I fuse sdb4 and sdb2 into one partition and use it as my home directory without losing its contents?
    Below is some information that I think would be helpful.
    Thank you.
    [miles]> cd /media/usbhd-sdb4/
    [usbhd-sdb4]> ls -l
    total 20
    drwx------ 2 root root 16384 May 28 14:16 lost+found
    drwx------ 76 miles users 4096 Oct 15 00:42 miles
    [usbhd-sdb4]> cd miles
    [miles]> pwd
    /media/usbhd-sdb4/miles
    [miles]>
    lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 465.8G 0 disk
    ├─sda1 8:1 0 19.5G 0 part
    └─sda2 8:2 0 446.2G 0 part
    sdb 8:16 0 465.8G 0 disk
    ├─sdb1 8:17 0 102M 0 part /media/usbhd-sdb1
    ├─sdb2 8:18 0 258.9M 0 part [SWAP]
    ├─sdb3 8:19 0 14.7G 0 part /
    └─sdb4 8:20 0 450.8G 0 part /media/usbhd-sdb4
    sr0 11:0 1 1024M 0 rom

    Check your udev rules...

  • Failed to export to pdf - need help in understanding adobe advice

    My InDesign document failed to export to PDF, so I looked up the InDesign Help (see below). I could not work out how to proceed after step 2. Would anyone help?
    thank you
    Solution 2: Downsample the images after export using  Acrobat.
    Choose File > Export and select  PDF as your format, choose a location and click Save.
    In the Export PDF dialog box, disable the image downsampling  options.
    Open the resulting PDF in Acrobat.
    Choose Advanced > PDF Optimizer.
    Set the downsampling options you desire in the Image Settings pane.
    Disable any other options that are not needed.
    Click OK and choose the name and location to save the file.

    I'm going to assume you actually are unable to accomplish/understand step 2 as opposed to not knowing how to open the resulting pdf.
    See screen shot - use the pull down menus under compression for Color Images, Grayscale Images and Monochrome Images - Select Do Not Downsample. Although, I think the Help file has you chasing your tail.

  • Need help with understanding HD

    Just picked up a new HD video camera and it seems technology has played a trick on me.
    My Sony HD camcorder imports the file as a .m2ts file.
    When it is imported in the Adobe Elements Organizer, the preview looks skinny and "squished".
    It seems like the camcorder is designed for you to video a whole bunch of stuff and then burn it to DVD in high definition or normal resolution, whereas I like to record in clips, organize them, and then add them into Premiere Elements as needed. So I really don't understand what to do, or how to do it, to get the results I need.
    Do I need to manually convert each .m2ts file into an .avi?
    When I open an HD clip in Premiere, the size is right, but the preview is all blurry and jumpy.
    What is the best way for me to take one of my clips and have it ready to insert in a movie?
    Thanks - any help would be appreciated.

    This is aimed at Premiere Pro, but may help
    A link with many ideas about computer setup http://forums.adobe.com/thread/436215?tstart=0
    Work through all of the steps (ideas) listed at http://ppro.wikia.com/wiki/Troubleshooting
    If your problem isn't fixed after you follow all of the steps, report back with the DETAILS asked for in the FINALLY section, the questions at the end of the troubleshooting link... most especially the codec used... see Question 1

  • Need help to understand the relation bw VBKD and VBAP

    Hi all,
    I have a requirement to display the incoterms of the ship-to party if the ship-to party is different from the sold-to party.
    At header level it is OK, but at item level I'm facing some problems.
    READ TABLE xvbkd WITH KEY vbeln = xvbap-vbeln
                                        posnr = xvbap-posnr.
              IF sy-subrc = 0.
                xvbkd-inco1 = wa_xvbkd-inco1.
                xvbkd-inco2 = wa_xvbkd-inco2.
                MODIFY xvbkd FROM wa_xvbkd INDEX sy-tabix TRANSPORTING  inco1 inco2.
              ENDIF.
    I'm getting the incoterms, but the problem is that XVBKD does not get populated with all the items even though VBAP has multiple items.
    Please help me understand why VBKD does not get populated with item details.
    In some cases it does get populated, and then it works fine.
    Your valuable suggestions are highly appreciated.
    Rgs,
    Priya

    Please help me understand why VBKD does not get populated with item details.
    Hi,
    When you define a table lookup for table VBKD (Sales document: Business data), data does not have to be present for every item in the sales document. If the item data is no different from the header data, the system does not store it at item level as well. In that case, you can find the valid values in the header data, which is stored in table VBKD under item number "000000".
    Thus, if you want to read incoterms values from VBKD, you always need to define two table lookups:
    1: Table lookup in table VBKD using the key VBELN and POSNR;
    2: Table lookup in table VBKD using the key VBELN and POSNR ="000000" (if the 1st lookup failed)
    Regards,
    Andrea

Maybe you are looking for

  • Installing 9i (9.2.0.1) Database on Centos 4.5

    Hi guys, I'm 90% succeed on installing Oracle 9.2.0.1 on Centos 4.5. the problem is that I'm having a problem at the end of installation. I followed the installation from : http://www.oracle-base.com/articles/9i/Oracle9iInstallationOnRedHat9.php and

  • Safari/Mail certificate problem with gmail/google

    Here is my problem: I have set-up Mail to use my gmail account through POP. Since yesterday, when I try to get or send mail, mail gives me the error: Unable to verify SSL server pop.gmail.com Mail was unable to verify the identity of this server, whi

  • Can I use airport express in a wired network to play music through my stereo?

    I originally used my 1st gen AE to wirelessly stream music from iTunes on my MacPro to my home theater.  This works, but there are some good days and some bad days where there are dropouts in the music.  I have recently run Cat6 cable to my HT from m

  • On compilation in J2ME WTK, Midlet code generates clone() method

    I did try to compile a MIDlet which creates a user defined object in J2ME WTK. The compiled code has Object.clone() method invocation. However clone() method is not available in J2ME CLDC API. So the code results in java.lang.NoSuchMethodError and su

  • Costing in SAP B1

    Dear SAP Gurus, Can you please explain me how to confire the Costing part.  The client wants to capture the cost involved in transporation of goods from factory to customer place.