Split Table Procedure in Database Migration

Hello,
We are doing a database migration from Informix to Oracle.
We have a big table, PCL4, in our system, which is around 100 GB.
I want to split this table and would like to know the procedure for doing so.
What changes are needed at the database level, in the STR files, in the TOC files, and, if any, in the .cmd files?
Thanks & Regards,
RaHuL...

RaHul,
In my last Informix migration experience (HeSC), I didn't use table splitting, because of the following caution:
CAUTION: This may not be possible on databases that need exclusive access on the table when creating an index (for example, Informix).
Instead, I used the following Informix features:
1. We fragmented the big table (in our case CDCLS) into four pieces (four dbspaces) and moved the index CDCLS~0 to another dbspace.
2. We increased the CPU VPs and used the following Informix parameters:
PDQPRIORITY 100
PSORT_NPROCS 32
The result of the test: roughly 5 h 50 min saved during the export.
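For reference, a hedged sketch of what those Informix features look like in SQL (the dbspace names are illustrative; PSORT_NPROCS is set in the environment rather than in SQL):
-- Spread the table across four dbspaces so scans can run in parallel:
ALTER FRAGMENT ON TABLE cdcls
  INIT FRAGMENT BY ROUND ROBIN IN dbsp1, dbsp2, dbsp3, dbsp4;
-- Raise parallel-query priority for the exporting session:
SET PDQPRIORITY 100;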
Cheers,
Jairo

Similar Messages

  • Oracle Database Migration Verifier - Can it be used for different table structures?

    Hello,
    We have re-engineered some of the existing Sybase tables into a new structure in Oracle. For example, one table in Sybase is normalized into two tables in Oracle. In such cases, can the "Oracle Database Migration Verifier" be configured so that the columns of one Sybase table are mapped to two Oracle tables with their respective column names?
    In short: can the tool be used even if the structure is not the same in the source and target databases?
    Please let me know if you need more clarifications regarding my query.
    Regards,
    Ramanathan.K

    Not really. The DMV was a simple tool for verifying that what you now had in Oracle was what you had in Sybase. It does not do what you are expecting.
    B

  • Can I create a table in a database procedure?

    Hello
    Can I create a table in a database procedure?
    create or replace procedure create_table is
    tt number;
    begin
    create table temp (
    a number,
    b varchar2(100));
    end;
    It returns an error.
    regards,
    deemy

    DDL statements such as CREATE TABLE are not allowed as static SQL inside PL/SQL; run the DDL through dynamic SQL instead:
    create or replace procedure create_table is
    tt number;
    begin
       execute immediate 'create table temp (a number,b varchar2(100))';
    end;
    /
    Cheers,
    Sarma.
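    A quick usage note on the fix: EXECUTE IMMEDIATE runs the DDL with the owner's directly granted privileges, so CREATE TABLE must be granted to the procedure owner directly, not only through a role. A minimal call:
    BEGIN
      create_table;
    END;
    /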

  • Database migration query not executing in database

    We moved over our SQL database to another server. We have a few .cfm template files that have queries to get 'content' from our database. These queries in the .cfm template files work fine and execute properly. However, for some reason, since we moved over our database, the queries in the actual database don't execute/run anymore. In the old server we didn't have this problem. Our queries in the database would run properly then. The data would display correctly on the webpages.
    Is there some kind of properties setup that was overlooked when migrating over our database?
    The output we now see when viewing our pages is the actual query code.
    Example:
    SELECT lname
    FROM table
    This is actually displayed on the website and therefore the query is not executing/running. Our code was never changed. So why is it not outputting the same?
    Thanks for any help provided.

    Sorry for the confusion.
    The ColdFusion templates and the database-stored queries both worked fine on the old server.
    Since the migration to the new server, only the ColdFusion templates work correctly; the queries stored in the database do not.
    So, for example, a query in the ColdFusion template runs fine and executes the initial pull of data. But once the content is pulled from the database, the queries inside the database (including stored procedures) don't execute.
    We do get the expected results when we run our stored procedure in Management Studio, and if we put the stored procedure on a standalone web page we do see the records.
    We only see the SQL code when the stored procedure is inside the w_content table of our database and is being called by the first query in the template. For some odd reason the stored procedure does not execute if it is inside the database.
    Hopefully that clarifies our problem.
    Thank you for helping!

  • Regarding database migration and server hardware.

    Hi Experts,
    We have an Oracle 10g database running on HP-UX 11i V1 on an RP7400 server (PA-RISC architecture). We are planning a database migration from Oracle to MaxDB. At the same time we plan to move to a new server with Itanium architecture and to upgrade the operating system from HP-UX 11i V1 to HP-UX 11i V3.
    Please let me know how to proceed with this migration.
    Thanks,
    Narendra

    > You just export the database (selecting MaxDB so that the necessary DDL files are copied) and then install your new system.
    > After that you can use sapinst to install / load the SAP system using the export.
    One may need to use table splitting (for downtime reduction), special load arguments for R3load, SMIGR_CREATE_DDL etc. - so it's not as easy as just exporting and importing.
    Markus

  • How to create a view consisting of data from tables in 2 different databases

    Using Oracle 10g Release 2 (10.2).
    I have two databases, GUS and HAGGIS, each with a COMQDHB schema.
    glink is a database link from HAGGIS to GUS.
    In GUS there are tables student, subject, grade and school, with columns as follows:
    STUDENT(upn, academicYear)
    SUBJECT(subject)
    GRADE(examlevel, grade)
    SCHOOL(sn)
    In HAGGIS there are tables student, grade and teacher, with columns as follows:
    STUDENT(upn)
    GRADE(grade, upn, academicyear, level)
    Create view in your HAGGIS database which will join all of the exam grades together. You should have one view which will produce the following relation :
    examGrade(upn, subject, examlevel, sn, grade,academicYear)
    So I need to create a view which gets the data from the tables in both databases:
    create view examGrade(upn, subject, examlevel, sn, grade, academicYear) as select s.upn ...
    But I do not see how to select a column that exists in two tables in different databases. For example, if I run
    select upn from comqdhb.student@glink, comqdhb.student;
    I get:
    ERROR at line 1:
    ORA-00918: column ambiguously defined
    help me out,Thank you.
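    For what it's worth, the ORA-00918 just means that upn exists in both tables, so each reference must be qualified with a table alias. A minimal sketch:
    select s_gus.upn
    from   comqdhb.student@glink s_gus
    ,      comqdhb.student s_haggis
    where  s_gus.upn = s_haggis.upn;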

    Thank you for the reply; I will follow the code format.
    Create views in your HAGGIS schema database which will join all of the exam grades together. You should have one view which will produce the following relation:
    examGrade(upn, subject, examlevel, sn, grade, academicYear)
    I understand that there won't be duplication when we use conditions.
    If I query
    select count(upn)
    from   comqdhb.student@glink
    I get 9000, but after the union:
    create view examGrade(upn, subject, examlevel, sn, grade,academicYear)
    as
    select distinct s.upn as upn
    ,                  g.subject as subject
    ,                  g."LEVEL" as examlevel
    ,                  g.grade as grades
    ,                  '9364097'
    ,                  to_number(g.academicyear) as academicyear
    from             comqdhb.student s
    ,                   comqdhb.grade g
    where           s.upn=g.upn
    union
    select            s.upn
    ,                   sb.subject
    ,                   g.elevel
    ,                   g.grade
    ,                   s.acyr
    ,                   sc.sn
    from              comqdhb.subject@glink sb
    ,                   comqdhb.student@glink s
    ,                    comqdhb.gradevalues@glink g
    ,                    comqdhb.school@glink sc
    ,                    comqdhb.studentingroup@glink sg
    ,                    comqdhb.teachinggroup@glink tg
    where            sb.sid=tg.sid
    and                tg.gid=sg.gid
    and                sg.upn=s.upn
    and                g."LEVEL"=tg.elevel
    and                s.school=sc.id
    and                sc.id=tg.id; returns
    select count(upn) from examGrade gets stuck; sometimes it returns 932002 rows.
    2. Another problem I am having, which I have tried to solve, writing up my ideas below, but without getting the expected results. I hope you can help. Thank you.
    Information:
    =======
    All children take exams at the age of 16 called a General Certificate of Secondary Education (GCSE).
    They have to study and take exams in Mathematics, English and Science, and can take other subjects such as History, French, Art etc. Most students will study between 5 and 10 different subjects before taking their GCSEs.
    For each exam, a student is awarded a grade from A*, A, B, C, D, E, F, G, U, X. An A* grade is the best grade achievable and an X is the worst grade.
    In order to analyze how students have performed, each grade is mapped to a numeric value as follows:
    Grade Numerical score
    A* 8
    A 7
    B 6
    C 5
    D 4
    E 3
    F 2
    G 1
    U 0
    X 0
    The reason I need this avgGCSE: I have to create a view containing the avgGCSE of the students, which is used in the next question, where one condition is that avgGCSE is between 6.5 and 7.
    To calculate the avgGCSE, the idea is to take each student's grades, map the grades to their corresponding scores/values, add them all up, and divide by the total number of grades to get the average.
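    A hedged sketch of that calculation (assuming, as the GRADEVALUES listing further down suggests, that the rows with LEVEL = 'g' hold the GCSE mapping):
    select   sg.upn
    ,        sum(gv.value) / count(*) as avggcse
    from     comqdhb.studentingroup@glink sg
    ,        comqdhb.gradevalues@glink gv
    where    gv.grade = sg.grade
    and      gv."LEVEL" = 'g' -- GCSE rows only; the 'a' rows are A-level values
    group by sg.upn;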
    desc comqdhb.STUDENT@glink;
    STUDENT
    =======
    UPN
    FNAME
    CNAME
    DOB
    GENDER
    PREVIOUSSCHOOL
    XGCSE
    SCHOOL
    ACYR
    STUDENTINGROUP
    =============
    UPN
    GID
    STARTDATE
    ENDDATE
    GRADE
    GRADEVALUES
    ===========
    GRADE
    LEVEL
    VALUE
    I believe that xgcse in the STUDENT table is the avgGCSE I want to calculate: when I asked my professor what xgcse was, he said he had forgotten to take it out of the table, and that it is not needed when creating avggcse.
    select *
    from comqdhb.student@glink
    where xgcse < 6.5;
    One row of the result:
    UPN FAMILYNAME COMMONNAME DATEOFBIR GENDER PREVIOUSSCHOOL XGCSE SCHOOL ACYR
    ===========================================================================
    1011 KIMBERLY ABBOT 07-JUL-79 f none 3.93500948 2 2
    select *
    from comqdhb.student@glink
    where xgcse between 6.5 and 7 and upn = 1386;
    One row of the result:
    UPN FAMILYNAME COMMONNAME DATEOFBIR GENDER PREVIOUSSCHOOL XGCSE SCHOOL ACYR
    ===========================================================================
    1386 STEPHANIE AANNESSON 15-JAN-79 f none 6.88873 2 2
    So if xgcse is the avgGCSE, then upn 1011 has avggcse < 6.5 and 1386 has avggcse > 6.5.
    My idea was a backward strategy: given that, say, upn 1386 has xgcse (avggcse) > 6.5, how do we extract that avggcse for the particular upn? We need to map grade in STUDENTINGROUP to the grades in GRADEVALUES, and upn in STUDENTINGROUP to upn in STUDENT, to output the values for the corresponding grades from GRADEVALUES.
    select grade
    from comqdhb.studentingroup@glink
    where upn = 1011;
    Result:
    GRADE
    =====
    D
    F
    B
    E
    C
    E
    E
    B
    8 rows selected.
    Mapping each grade to its corresponding value and calculating, we get 32/8 = 4, i.e. total(values of the grades) / number of grades.
    But the xgcse for upn 1011 is 3.935 and I am getting 4! So maybe xgcse isn't the average grade after all; but is my procedure for calculating avggcse correct?
    select grade
    from comqdhb.studentingroup@glink
    where upn = 1386;
    Result:
    GRADE
    ======
    A*
    A*
    A*
    A*
    B
    A*
    A*
    A
    B
    B
    B
    11 rows selected.
    Mapping each grade to its corresponding value and calculating, we get 79/11 ≈ 7.18, i.e. total(values of the grades) / number of grades.
    But the xgcse for upn 1386 is 6.88... and I am getting 7.18!
    But another problem arises when I run:
    select   g.value,g.grade
    from     comqdhb.gradevalues@glink g
    ,        comqdhb.studentingroup@glink sg
    where    g.grade=sg.grade
    and      sg.upn=1011;
    result:
    ======
    VALUE GRADE
    ===========
      100 B
      100 B
       80 C
       60 D
       40 E
       40 E
       40 E
       20 F
        6 B
        6 B
        5 C
    VALUE GRADE
    =============
        4 D
        3 E
        3 E
        3 E
        2 F
    16 rows selected.
    select   distinct g.value,g.grade
    from     comqdhb.gradevalues@glink g
    ,        comqdhb.studentingroup@glink sg
    where    g.grade=sg.grade
    and      sg.upn=1011;
    result:
    ======
    VALUE GRADE
    ============
         2 F
       100 B
         6 B
         3 E
        60 D
         5 C
         4 D
        80 C
        40 E
        20 F
    10 rows selected.
    I got only 8 rows for the query
    select grade
    from comqdhb.studentingroup@glink
    where upn = 1011;
    but here it becomes 10, and it is also displaying values like 100 and so on.
    select distinct *
    from   comqdhb.gradevalues@glink;
    GRADEVALUES
    ===========
    LEVEL      GRADE           VALUE
    ================================
    a          A                 120
    a          B                 100
    a          C                  80
    a          D                  60
    a          E                  40
    a          F                  20
    a          U                   0
    a          X                   0
    g          A                   7
    g          A*                  8
    g          B                   6
    LEVEL      GRADE           VALUE
    ================================
    g          C                   5
    g          D                   4
    g          E                   3
    g          F                   2
    g          G                   1
    g          U                   0
    g          X                   0
    18 rows selected.
    I was hoping I could map the grades, get the values, and calculate avggrade as total(values)/count(values), and that would be it; but here there are values like 100...
    select  sum(g.value)/count(g.grade) as avggrade
    from    comqdhb.gradevalues@glink g
    ,         comqdhb.studentingroup@glink sg
    where  g.grade=sg.grade
    and     sg.upn=1386;
    avggrade
    ========
    37.4375
    The avggrade can't be this big: when I map each grade obtained for 1386 (A to 7, B to 6, and so on) I get an avggrade of about 7.18.
    kindly help.
    Edited by: Trooper on Dec 15, 2008 4:49 AM
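    A hedged diagnosis of the 37.4375 figure: the join matches each grade letter against both the A-level ('a') and GCSE ('g') rows of GRADEVALUES, which inflates the sum. Restricting to the GCSE rows should bring it back in line:
    select  sum(g.value)/count(g.grade) as avggrade
    from    comqdhb.gradevalues@glink g
    ,       comqdhb.studentingroup@glink sg
    where   g.grade = sg.grade
    and     g."LEVEL" = 'g'
    and     sg.upn = 1386;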

  • Error while importing tables from an Oracle database

    Hi
    I am getting the following error when I try to import tables from an Oracle database.
    My operating system is Windows and my database is Oracle.
    [nQSError: 16001]ODBC error state: IM004 code:0 message:
    [Microsoft][ODBC Driver Manager] Driver`s SQLAllocHandle on SQL_HANDLE_ENV failed.
    please help me in resolving this issue.
    Thanks and Regards,
    Raj

    Hi Madan,
    I have migrated the Discoverer Admin EUL layer into an OBIEE repository using the methodology below.
    Navigate to the <installdrive>\OracleBI\server\Bin directory. There are two important files in this directory: the migration assistant executable file named MigrateEUL.exe and a properties configuration file named MigrationConfig.properties.
    Could you please tell me how to migrate Discoverer Plus workbooks and worksheets into OBIEE Answers?
    Go through the link below; it shows the navigation steps for migrating an EUL from Discoverer to OBIEE. But I need to migrate workbooks and worksheets from Discoverer into OBIEE Answers.
    http://www.oracle.com/technology/obe/obe_bi/discoverer/discoverer_1012/discomigration/migrate_disco_biee.htm
    This would be a great help to me.
    Advance thanks for your suggestions.
    Regards
    Duraga Prasad.

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom coded WHERE clauses. (Basically adding additional columns to the R3ta default column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down, in some cases, tables export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, with the new WHERE clause, the performance gains looked dramatic: export times for the first 5 packages dropped from 25-30 hours down to 1 1/2 hours.
    However, after 2 hours, the remaining CE1OC01 split packages showed no further progress. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slowly.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated excessive sorting.
    I checked the access path of all the CE1OC01 packages, through EXPLAIN, and they all access the same index to return results. The execution time in EXPLAIN returns similar times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
          0 SELECT STATEMENT ( Estimated Costs =  8.448E+06 [timerons] )
      |
      ---      1 RETURN
          |
          ---      2 FETCH CE1OC01
              |
              ------   3 IXSCAN CE1OC01~4 #key columns:  2
    query execution time [millisec]            |       333
    uow elapsed time [microsec]                |   429,907
    total user CPU time [microsec]             |         0
    total system cpu time [microsec]           |         0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access path side of things or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but those are not as dramatic as this particular table.
    Another idea to test is to try and export only 5 parts of the table at a time, perhaps there is a throughput or logical limitation when all 20 of the exports are running at the same time. Or create a single column index on BELNR (default R3ta column) and see if that shows any improvement.
    Anyone have any ideas on why some of the table moves fast but the rest of it moves slow?
    We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if perhaps the index is less fragmented in that range, a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a Cluster Ratio of 54%, so, perhaps for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary key index has a Cluster Ratio of 86%.
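    For reference, hedged sketches of the DB2 commands those two ideas might use (the schema name and the new index name are illustrative, not taken from our system):
    -- Single-column index on the default R3ta split column:
    CREATE INDEX sapr3."CE1OC01~Z9" ON sapr3.ce1oc01 ("BELNR");
    -- Cluster the table around the export index, then refresh optimizer statistics:
    REORG TABLE sapr3.ce1oc01 INDEX sapr3."CE1OC01~4";
    RUNSTATS ON TABLE sapr3.ce1oc01 WITH DISTRIBUTION AND DETAILED INDEXES ALL;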
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding the SORTHEAP, we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. Looks like for this run, and the previous run, SORTHEAP has remained available and has not overrun. That's what was so confusing, because this looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Being always trained to eliminate full tablescans, it seems counterintuitive at first, but, given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right: in this case the index adds too much overhead, especially since our Cluster Ratio was terrible (in the 50% range), so the index was definitely working against us by bouncing all over the place to pull the data out.
    We're going to look at some of our other long running tables and see if this technique improves runtimes on them as well.
    Thanks so much, that helped us out tremendously. We will verify the data from source to target matches up 1 for 1 by running a consistency check.
    Look at the throughput difference between the previous run and the current run:
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91

  • Error Stored Procedure Upgrading database from SBO 2007A PL42 to SBO 8.8

    Hi,
    When executing the SBO 8.8 PL00 upgrade procedure on an "old" 2007A PL42 database, the upgrade stops with an error about a missing stored procedure: "_TmSp_CorrectWrongDocLineNumberInOINMForIPF ... not found".
    The error details say that it occurs at the "Stock Updates" step (just after Checks for Payment), and the log file gives the message below.
    Does anybody have an idea how to upgrade the 2007A database?
    Thanks
    Error message in log :
    Upgrade          Error              Technical     Failed to call Stored Procedure TmSpCorrectWrongDocLineNumberInOINMForIPF
    Edited by: Michel Diepart on Oct 24, 2009 10:55 PM

    Hi Paul,
    Thanks for your prompt response on a Saturday, and so late in the evening.
    Concerning the stored procedure: I would like to delete it, but it doesn't exist, and that's the problem.
    There is no such procedure in my database (Programmability / Stored Procedures folder in SQL Server).
    Searching the forum for this stored procedure gives no results.
    Does this stored procedure have a link with something regarding stock adjustments?
    "Stock updates": which table is that in SAP B1? If necessary I could save the table, create an SQL query to re-create it after deletion, and try to restart the upgrade procedure, but I don't know which table is involved.
    I will also try with another 2007A PL42 database to see if it works.
    Do you know if anybody has already tried to upgrade a 2007A PL42 database to SBO 8.8?
    Best regards

  • Getting result set from stored procedures in database controls in weblogic

    I am calling a stored procedure from a database control which returns a result set.
    When I declare the call like this:
    * @jc:sql statement="call PROC4()"
    ResultSet sampleProc() throws SQLException;
    it gives me an exception saying
    "weblogic.jws.control.ControlException: Method sampleProc is DML but does not return void or int"
    I would appreciate any help.
    Thanks,
    Uma

    Thanks for your reply!
    1) The stored procedure header is:
    CREATE OR REPLACE PROCEDURE X_OWNER.DISPLAY_ADDRESS
    (
      cv_1 IN OUT SYS_REFCURSOR
    )
    AS
      err_msg VARCHAR2(100);
    BEGIN
      --Adaptive Server has expanded all '*' elements in the following statement
      OPEN cv_1 FOR
        Select ...
      commit;
    EXCEPTION
      WHEN OTHERS THEN
        err_msg := SQLERRM;
        dbms_output.put_line (err_msg);
        ROLLBACK;
    END;
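    For hand testing, a minimal SQL*Plus sketch to fetch such a ref cursor:
    VARIABLE rc REFCURSOR
    EXECUTE x_owner.display_address(:rc);
    PRINT rc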
    If I run just the SELECT in DBArtisan, it displays all 2,030,000 rows in 3:44 minutes.
    2) But calling the stored procedure takes 80-100 minutes.
    3) The stored procedure was translated from Sybase using migration tools; it's very simple. In Sybase it is just:
    CREATE PROCEDURE X_OWNER.DISPLAY_ADDRESS
    AS
    BEGIN
    select ..
    The select part is exact same.
    4) The Perl code is almost exactly the same, except for the query SQL:
    Sybase version: my $sql = "exec DISPLAY_ADDRESS";
    and there is no need to bind the cursor parameter.
    This is a batch job: we create a file with all the information and FTP it to clients every night.
    Thanks!
    Rulin

  • Problem with import of split tables.

    We are creating a copy of our production system. The system copy involves a change of operating system from HP-UX to Windows x86_64 and also includes a Unicode conversion. We followed these steps:
    1) Performed post copy steps and pre-unicode conversion steps
    2) Split big tables using OSS note 1043380 - Efficient Table Splitting
    for Oracle Databases
    3) Copied *.WHR files to export directory and performed export using SAPinst
    4) Copied files export dump to new server.
    5) Created whr.txt manually and copied it to the DATA directory of
    export dump
    6) Started import using SAPinst
    Surprisingly, the import does not process the split tables but marks
    them as complete in the TSK files. The number of rows imported is 0 for
    those tables.
    The only error is:
    (RFF) ERROR: no entry for table "CDCLS" in "F:\export\ABAP\DATA\CDCLS-1.TOC"
    (IMP) INFO: import of CDCLS completed (0 rows) #20110122022015
    Why is SAPinst not importing the tables that were split? All tables that were not split import successfully, and R3load is patched.

    Resolved.
    This is a known issue, caused by the SQL splitter tool generating space characters at the end of the WHERE clause within the TSK file.
    At import time those entries are not recognized and nothing is imported, so in the import log you get:
    no entry for table "BALDAT" in "F:\ccpexport\CCP\ABAP\DATA\BALDAT-1.TOC"
    As a workaround, apply the following:
    1. Remove all blank characters at the end of the WHR clause in the TSK file, i.e.:
    D BALDAT I ok
    WHERE ("LOG_HANDLE" < '4iLKIlCIjLpX0000p5bqa0')__ <-- the two blank characters at the end of the line are marked here with '_'
    Remove these characters so that there are no blanks at the end of the line.
    2. Set the status in the corresponding TSK file from 'ok' to 'err'.
    3. Repeat the import of the corresponding packages.
    Note that, as per note 1043380, you should use an R3load 7.00 version built after June 18th, 2008 for both the export and the import; this issue should be fixed there.

  • How to search all columns of all tables in a database

    I need to search all columns of all tables in a database. I have written the code below, but I get the error message below when I run this script.
    DECLARE
    cnt number;
    v_data VARCHAR2(20);
    BEGIN
    v_data :='5C4CA98EAC4C';
    FOR t1 IN (SELECT table_name, column_name FROM all_tab_cols where owner='admin' and DATA_TYPE='VARCHAR2') LOOP
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' ||t1.table_name|| ' WHERE ' ||t1.column_name || ' = :1' INTO cnt USING v_data;
    IF cnt > 0 THEN
    dbms_output.put_line( t1.table_name ||' '||t1.column_name||' '||cnt );
    END IF;
    END LOOP;
    END;
    Error report:
    ORA-00933: SQL command not properly ended
    ORA-06512: at line 7
    00933. 00000 - "SQL command not properly ended"
    *Cause:   
    *Action:
    Any help please
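    For reference, a hedged variant of the same loop that quotes the identifiers and matches the owner in upper case (two common causes of this kind of failure):
    DECLARE
      cnt    NUMBER;
      v_data VARCHAR2(20) := '5C4CA98EAC4C';
    BEGIN
      FOR t1 IN (SELECT table_name, column_name
                 FROM all_tab_cols
                 WHERE owner = 'ADMIN'   -- the dictionary stores owners in upper case
                 AND data_type = 'VARCHAR2') LOOP
        EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM "' || t1.table_name
          || '" WHERE "' || t1.column_name || '" = :1' INTO cnt USING v_data;
        IF cnt > 0 THEN
          dbms_output.put_line(t1.table_name || ' ' || t1.column_name || ' ' || cnt);
        END IF;
      END LOOP;
    END;
    /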

    SQL solutions by Michaels
    michaels>  var val varchar2(5)
    michaels>  exec :val := 'as'
    PL/SQL procedure successfully completed.
    michaels>  select distinct substr (:val, 1, 11) "Searchword",
                    substr (table_name, 1, 14) "Table",
                    substr (t.column_value.getstringval (), 1, 50) "Column/Value"
               from cols,
                    table
                       (xmlsequence
                           (dbms_xmlgen.getxmltype ('select ' || column_name
                                                    || ' from ' || table_name
                                                    || ' where upper('
                                                    || column_name
                                                    || ') like upper(''%' || :val
                                                    || '%'')'
                                                   ).extract ('ROWSET/ROW/*')
                       ) t
    --        where table_name in ('EMPLOYEES', 'JOB_HISTORY', 'DEPARTMENTS')
           order by "Table"
    or, from 11g upwards:
    SQL> select table_name,
           column_name,
           :search_string search_string,
           result
      from (select column_name,
                   table_name,
                   'ora:view("' || table_name || '")/ROW/' || column_name || '[ora:contains(text(),"%' || :search_string || '%") > 0]' str
              from cols
             where table_name in ('EMP', 'DEPT')),
           xmltable (str columns result varchar2(10) path '.')
    TABLE_NAME                     COLUMN_NAME                    SEARCH_STRING                    RESULT   
    DEPT                           DNAME                          es                               RESEARCH 
    EMP                            ENAME                          es                               JAMES    
    EMP                            JOB                            es                               SALESMAN 
    EMP                            JOB                            es                               SALESMAN 
    4 rows selected.

  • How can I find statistics tables in my database?

    Upgrade Statistics Tables Created by the DBMS_STATS Package
    If you created statistics tables using the DBMS_STATS.CREATE_STAT_TABLE procedure, then upgrade these tables by executing the following procedure:
    EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('SYS','dictstattab');
    In the example, 'SYS' is the owner of the statistics table and 'dictstattab' is the name of the statistics table. Execute this procedure for each statistics table.
    The above is one of the post-upgrade steps for 11gR2. How can I find the statistics tables in my database?

    You need to read the complete context of this manual upgrade step:
    Step 33
    Upgrade Statistics Tables Created by the DBMS_STATS Package
    If you created statistics tables using the DBMS_STATS.CREATE_STAT_TABLE procedure, then upgrade these tables by executing the following procedure:
    EXECUTE DBMS_STATS.UPGRADE_STAT_TABLE('SYS','dictstattab');
    In the example, 'SYS' is the owner of the statistics table and 'dictstattab' is the name of the statistics table. Execute this procedure for each statistics table.
    You need to do this step for any statistics table that you created manually with DBMS_STATS.CREATE_STAT_TABLE.
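    A hedged way to locate candidates: tables created by DBMS_STATS.CREATE_STAT_TABLE share a recognizable column signature (STATID, TYPE, VERSION, FLAGS, ...), so a dictionary query along these lines can list them:
    SELECT owner, table_name
    FROM   dba_tab_columns
    WHERE  column_name IN ('STATID', 'TYPE', 'VERSION', 'FLAGS')
    GROUP  BY owner, table_name
    HAVING COUNT(*) = 4;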
    Marcus

  • Datapump skipping partitioned tables in the database

    I ran expdp on Oracle 10.2.0.4.0 on the AIX 5.6 platform. The export runs well, exporting rows in the database, but when it comes to the partitioned tables it exports no rows for any of them. When I run a normal exp/imp, the partitioned tables are exported with all their rows.
    I used the following commands:
    expdp system/****** dumpfile=export_data.dmp directory=DATA_PUMP_DIR full=y logfile=export_dump.log
    Output for expdp on partitioned table:
    . . exported "SCOTT"."DEPT":"DEPT_2003_P1" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P10" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P11" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P12" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P2" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P3" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P4" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P5" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P6" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P7" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P8" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P9" 0 KB 0 rows
    And for exp:
    exp system/****** file=export_dump.dmp full=y log=export_log1.log
    Result from the export log for partitioned tables:
    . . exporting partition DEPT_2005_P1 881080 rows exported
    . . exporting partition DEPT_2005_P2 1347780 rows exported
    . . exporting partition DEPT_2005_P3 2002962 rows exported
    . . exporting partition DEPT_2005_P4 2318227 rows exported
    . . exporting partition DEPT_2005_P5 3122371 rows exported
    . . exporting partition DEPT_2005_P6 3916020 rows exported
    . . exporting partition DEPT_2005_P7 4217100 rows exported
    . . exporting partition DEPT_2005_P8 4125915 rows exported
    . . exporting partition DEPT_2005_P9 1913970 rows exported
    . . exporting partition DEPT_2005_P10 1100156 rows exported
    . . exporting partition DEPT_2005_P11 786516 rows exported
    . . exporting partition DEPT_2005_P12 822976 rows exported
    I am not sure about this behaviour from Data Pump. My database is more than 800 GB and we want to migrate it from AIX to Linux.
    Thanks

    Sorry, I just copied and pasted some extracts from my exp and expdp logs.
    For testing purposes I ran a Data Pump export of only one partitioned table in the database, and it goes through; but when I do the same as part of a full Data Pump export, these partitioned tables are exported with no rows.
    Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 August, 2011 12:18:47
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** dumpfile=DEPT.dmp tables=scott.dept logfile=dept1.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 48.50 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/RLS_POLICY
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SCOTT"."DEPT":"DEPT_2009_P6" 1.452 GB 7377736 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P7" 1.363 GB 6935687 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P6" 1.304 GB 6656096 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P7" 1.410 GB 7300618 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P7" 1.296 GB 6641073 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P6" 1.328 GB 6863885 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P6" 1.158 GB 6568075 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P5" 1.141 GB 5801822 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P5" 1.162 GB 6027466 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P7" 1.100 GB 6214680 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P6" 1.106 GB 5762303 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P5" 1.133 GB 5859492 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P5" 1.001 GB 5664315 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P5" 1.023 GB 5229356 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P8" 1.078 GB 5549666 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P8" 940.3 MB 5171379 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P8" 989.0 MB 4920276 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P8" 918.6 MB 4553523 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P6" 821.0 MB 5220879 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P4" 766.6 MB 3832262 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P8" 747.9 MB 4753538 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P7" 741.8 MB 4708242 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P4" 734.2 MB 3713567 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P7" 661.4 MB 4217100 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P8" 647.1 MB 4125915 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P4" 677.8 MB 3428887 rows
    I also tried a normal schema-by-schema export with the usual exp system/password command and got my dump file, which is about 300 GB. When I run the imp system/password command and specify fromuser=<system > and touser=<schemas_in_the_dumpfile> separated by commas, it just comes up with this message:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
    Import terminated successfully without warnings.
    No tables are imported.
    If I instead run imp system/password file=dept_export.dmp full=y log=dept_imp.log with the same dump file, it imports the data into my database.
    I am not sure what could be wrong with my dump file or my imp command and its parameters.
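    One hedged thing to check, since the expdp log above shows RLS_POLICY objects being processed: a row-level security (VPD) policy on those tables can silently filter rows for the Data Pump session. A dictionary query to list any policies:
    SELECT object_owner, object_name, policy_name, enable
    FROM   dba_policies
    WHERE  object_owner = 'SCOTT';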

  • IBM DB2 to Oracle Database Migration Using SQL Developer

    Hi,
    We are migrating a whole database from IBM DB2 8.2 running on Windows to an Oracle 11g database on Linux.
    As part of the prerequisites we have installed Oracle SQL Developer 4.0.1 (4.0.1.14.48) on the Linux server with JDK 1.7, and established a connection to the Oracle database.
    Questions:
    1) How can we enable third-party database connectivity in SQL Developer? I have copied the files db2jcc.jar and db2jcc_license_cu.jar from the IBM DB2 (Windows) machine to the Oracle (Linux) server.
    2) Are these JAR files universal drivers? Will they work on the Linux platform?
    3) I have a DB2 schema named "assistdba" with full privileges. Shall I create a new user with the same name "assistdba" in the Oracle database and grant it the DBA privilege? (This is for repository creation.)
    4) We have around 35 GB of data in DB2; shall I proceed with ONLINE CAPTURE during the migration?
    5) Do you have an approximate estimate of the time needed to migrate 35 GB of data?
    6) In case of any issue during the migration activity, can we get support from the Oracle team (we have a valid support ID)?
    7) What test cases are necessary to confirm a valid migration?
    Please share the relevant Metalink documents.
    Kindly guide me so that we can go ahead with a successful migration.
    Thanks in Advance!!!
    Nagu
    [email protected]

    Hi Klaus,
    Continuing from the posts above: we are now doing another database migration from IBM DB2 to Oracle, this time with much less data (e.g. 20 tables & 22 indexes).
    As with the previous database migration, we have done the prerequisite steps in SQL Developer:
    Created the migration repository
    Connected as the created user in SQL Developer
    Captured the source database
    Converted the captured model to Oracle
    Before the translation phase we clicked on the "Proceed Summary"
    The captured database objects & converted database objects have been created under the PROJECT section.
    While checking the status of the captured & converted database objects, it shows the following chart as a sample:
    OVERVIEW
    PHASE     TABLE DETAILS   TABLE PCT
    CAPTURE   20/20           100%
    CONVERT   20/20           100%
    COMPILE   0/20            0%
    TARGET STATUS
    (Each OBJECTNAME cell is a SQL Developer drill-down link of the form SQLDEV:LINK:&SQLDEVPREF_TARGETCONN:...:TRADEIN1:INDEX:<index name>:...; only the index names are kept below.)
    DESC_OBJECT_NAME  SCHEMANAME  OBJECTNAME  STATUS
    INDEX             TRADEIN1    ARG_I1      Missing
    INDEX             TRADEIN1    H0INDEX01   Missing
    INDEX             TRADEIN1    H1INDEX01   Missing
    INDEX             TRADEIN1    H2INDEX01   Missing
    INDEX             TRADEIN1    H3INDEX01   Missing
    INDEX             TRADEIN1    H4INDEX01   Missing
    INDEX             TRADEIN1    H4INDEX02   Missing
    INDEX             TRADEIN1    H5INDEX01   Missing
    INDEX             TRADEIN1    H7INDEX01   Missing
    INDEX             TRADEIN1    H7INDEX02   Missing
    INDEX             TRADEIN1    MAPIREP1    Missing
    INDEX             TRADEIN1    MAPISWIFT1  Missing
    INDEX             TRADEIN1    MAPITRAN1   Missing
    INDEX             TRADEIN1    OBJ_I1      Missing
    INDEX             TRADEIN1    OPR_I1      Missing
    INDEX             TRADEIN1    PRD_I1      Missing
    INDEX             TRADEIN1    S1TABLE01   Missing
    INDEX             TRADEIN1    STMT_I1     Missing
    INDEX             TRADEIN1    STM_I1      Missing
    INDEX             TRADEIN1    X0IAS39     Missing
    We see only "Missing" in the chart, and there is no option to trace it in a log file.
    Only after the status is VALID can we proceed with the translation & migration phases.
    Kindly help us with how to approach this issue.
    Thanks
    Nagu
