Munky's SQL performance challenge [APEX Chart]

Hi,
I have a chart currently in use, but it takes around 45 seconds (or worse) to load. One of the tables (corcon01_hour_chg_totals) has almost 20 million rows. At the moment I run them as separate queries, but I assume they could be combined into a single query returning multiple values (?). I have 4 queries, all relating to the same x-axis values but different y-axis values (1 uses the left axis, the other 3 use the right). Perhaps there is also a way to reuse the 2 previous parts (15 and 16) rather than running a new query to work out their total? Thanks for any help. Here's my current code:
Series 1 using left y-axis: http://www.pastebin.ca/1585574
Series 2 using right axis -- cpu usage of one node: http://www.pastebin.ca/1585575
Series 3 using right axis -- cpu usage of other node: http://www.pastebin.ca/1585577
Series 4 using right axis -- total cpu usage of both nodes: http://www.pastebin.ca/1585578
The following indexes are specified:
corcon01_hour_chg_totals indexes: http://i36.tinypic.com/296die0.png
corcon01_db_perform indexes: http://i34.tinypic.com/nff8g2.png
The function-based ones relate to the part of the query they serve, e.g. IDX_ACTUAL_WEEK's expression is: TO_CHAR(TO_DATE(RPAD("TD_HR",10),'yyyy-mm-dd'),'iw')
Let's see some magic :)
Mike
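Roughly what I have in mind for the single query (a sketch only - the column names and join key below are made up, since the real queries are in the pastebins above - but the shape is one row per x-axis point and one column per series, with the total derived rather than re-queried):
-- Hedged sketch: chg_total, node1_cpu and node2_cpu are hypothetical columns;
-- series 4 becomes the sum of series 2 and 3 instead of a fourth query.
SELECT NULL                                AS link,
       t.td_hr                             AS label,
       SUM(t.chg_total)                    AS changes,     -- series 1 (left axis)
       AVG(p.node1_cpu)                    AS node1_cpu,   -- series 2
       AVG(p.node2_cpu)                    AS node2_cpu,   -- series 3
       AVG(p.node1_cpu) + AVG(p.node2_cpu) AS total_cpu    -- series 4
FROM   corcon01_hour_chg_totals t
JOIN   corcon01_db_perform p ON p.td_hr = t.td_hr          -- hypothetical join key
GROUP  BY t.td_hr
ORDER  BY t.td_hr;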

Mike
Be a happy boy!
If you want SQL tuning advice, then throw it on the PL/SQL and SQL forum and I'm sure we can help you - Mr Kulash, Alex, Karthick, BluShadow and especially Boneist shall be your friends. Just be sure that you have read up on SQL tuning and can provide the DDL for your tables and indexes (I mean the creation scripts, not just DESCRIBE output - if you don't have SQL Developer or Toad or whatnot then look up DBMS_METADATA.GET_DDL) and the execution plans of the queries; a 10046 trace file might be handy too (google it - a 10053 trace could be required later if necessary).
You've had expert input from 2 Aces, neither of whom I would be comfortable debating with - I'm just a SQL monkey (and just because I'm not happy with a 10 sec load doesn't mean you shouldn't be). So you've already had good advice from the best - you can't beat that, sir!
Oh, and you have a SELECT MAX in one of your select clauses that doesn't reference anything in the actual query (the 'actual count' bit). Given the nature of the sub-query it's always going to be 1 row with one result, so you might want to factor it out with a WITH clause, materialize it (just to test) and then select from the sub-query factored temporary table... Don't forget to always run twice when testing so that you can disregard parsing time, as the execution plan is sitting pretty in the shared pool by the second time...
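Something along these lines (a sketch only - the MAX(td_hr) aggregate and the main select are stand-ins for your real SQL):
-- Sketch: the uncorrelated scalar subquery is factored out and evaluated
-- once rather than per row; /*+ materialize */ is just for testing, as above.
WITH actual_count AS (
  SELECT /*+ materialize */ MAX(td_hr) AS max_td_hr
  FROM   corcon01_hour_chg_totals
)
SELECT t.td_hr,
       ac.max_td_hr
FROM   corcon01_hour_chg_totals t
CROSS JOIN actual_count ac;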
Cheers
Ben

Similar Messages

  • Customize Apex charts

    Hello,
    I'm an Oracle DBA that's slowly learning APEX. Our company wants to use APEX for forms and various reports. I have absolutely no XML, Java, HTML or any such knowledge that would be useful for web designers; I rely mostly on the settings APEX provides, but I'm learning a few tricks here and there.
    My question is: how do I customize/edit charts? Here are the open issues:
    - I have a 3D stacked bar chart (flash chart), and I want to be able to add a line to this chart for the same data. The line could be a "trendline" or some fixed value. I saw a website (AnyChart) that offers this solution, but I have no idea how to incorporate it; in any case, I did some tests and it didn't work.
    - Changing the legend: we want to be able to change the order of the legend so that it fits the chart! Seems like a simple thing, but APEX does not offer this - it shows the legend in reverse order to the chart.
    Also, the legend seems to force a capital letter on the first character of each word, with the rest lower case.
    E.g. we want a legend definition "AA KCOO" and we get "Aa Kcoo". We define this in the SQL query, but it doesn't accept it.
    I suppose all this can be changed in XML coding, but how, and more importantly, why should it have to be? Shouldn't a tool like this offer the option of changing these things? The GUI seems so limited here. It's nice that you can customize in XML, but I think these are missing options...
    can anyone help, or guide me to the right documentation?
    thanks,
    mike

    Hi Dimitri,
    Thanks for the tip.
    First of all, getting the item in as you mentioned did not work. I tried with and without the single quotes.
    Without quotes (<line value=&P8_LINIE etc) gives me an "XML Error was malformed" message, and with the quotes (<line value='&P8_LINIE' etc) gives me a completely different number! It draws a line at the very top of the scale, fixed at 160 in my case.
    I did another test, redoing the flash chart in a different region. Exact same error, only this time, since I had set my y-axis max value at 180, the line was drawn at the top again - so it seems the line is automatically drawn at whatever maximum value was defined for the chart...
    Oh well, no idea where this comes from...
    As far as AnyChart goes - I took a look at the website - it costs a fair bit for the newer version! For one developer license 500 USD, for 4 developers 1000 USD!
    My question is: will a future APEX release include the newer version? Would it make sense to buy this, or should we wait until APEX 4.0 comes out?
    I have since discovered another thorny issue with APEX charts, but I will open a new thread for that; see my "remove marker from 2D Line flash chart" thread.
    So far, I'm quite frustrated at the lack of modification options available in this GUI tool. Perhaps in future versions, APEX graphs will be better...
    mike

  • Help needed in SQL performance - Using CASE in SQL statement versus 2 queries

    Hi,
    I have a requirement to find counts from a bunch of tables.
    The SQL I have gives the count of all members.
    I have created 2 queries to find count of active and inactive members.
    The key difference is only the active dates.
    Each query takes 20 seconds to execute.
    I modified the SQL to use CASE statement in the SELECT.
    So after the data is fetched, the CASE expressions evaluate the active date and give 2 counts (active and inactive).
    Is it advisable to use this approach? Will CASE improve SQL performance? I have to justify this.
    Please let me know your thoughts.
    Thanks,
    J

    Hi,
    If it can be done in a single SQL statement, do it in a single SQL statement.
    You said:
    Will CASE improve SQL performance?
    There are cases where it comes out better and cases where it comes out worse - you will have to test it and tell us how it goes in your case.
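    For example, a single-pass version along these lines (a sketch - the table name, column name and active-date test are hypothetical stand-ins for your real rule):
    -- One scan replaces the two separate 20-second queries; each CASE
    -- only counts the rows matching its condition.
    SELECT COUNT(CASE WHEN m.active_to_date >= SYSDATE THEN 1 END) AS active_count,
           COUNT(CASE WHEN m.active_to_date <  SYSDATE THEN 1 END) AS inactive_count
    FROM   members m;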
    Regards,
    Bhushan

  • SQL Performance issue: Using user defined function with group by

    Hi Everyone,
    I'm new here and I could really use some help on a weird performance issue. I hope this is the right topic for SQL performance issues.
    Well, OK: I created a function for converting a date from timezone GMT to a specified timezone.
    CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
    IS
    tz_name VARCHAR2(100);
    date_out date;
    BEGIN
    SELECT
    to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
    TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
    INTO date_out
    FROM dual;
    RETURN date_out;
    END fnc_user_rep_date_to_local;
    The following statement is just an example; the real statement is much more complex. So I select some date values from a table and aggregate a little.
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    This statement selects ~70000 rows and needs ~70 ms.
    If I use the function, it selects the same number of rows ;-) and takes ~4 sec ...
    select
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin')
    I understand that the DB has to execute the function for each row.
    But if I execute the following statement, it takes only ~90ms ...
    select
    fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
    noi
    from
    (select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    )
    The execution plan for all three statements is EXACTLY the same!!!
    Usually I would say use the third statement and the world is in order. BUT I'm working on a BI project with a tool called Business Objects, and it generates the SQL, so my hands are tied and I can't make the tool generate the SQL as a subselect.
    My questions are:
    Why is the second statement so much slower than the third?
    and
    How can I force the optimizer to do whatever it is doing to make the third statement so fast?
    I would really appreciate some help on this really weird issue.
    Thanks in advance,
    Andi

    Hi,
    You said the execution plan for all three statements is EXACTLY the same - not exactly. The plans are the same, true, but they use slightly different approaches to calling the function. See:
    drop table t cascade constraints purge;
    create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
    exec dbms_stats.gather_table_stats(user, 't');
    create or replace function test_fnc(p_int number) return number is
    begin
        return trunc(p_int);
    end;
    explain plan for select id from t group by id;
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from t group by test_fnc(id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from (select id from t group by id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    Output:
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL>
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "TEST_FNC"("ID")[22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$F5BB74E1
       2 - SEL$F5BB74E1 / T@SEL$2
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
          OUTLINE(@"SEL$2")
          OUTLINE(@"SEL$1")
          MERGE(@"SEL$2")
          OUTLINE_LEAF(@"SEL$F5BB74E1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    37 rows selected.
    Note the projections: in the second plan the HASH GROUP BY projects "TEST_FNC"("ID") as its key, so the function is evaluated for every input row, while in the third (merged) plan the grouping key is plain "ID" and the function is applied only to the handful of aggregated rows - which is why your second statement takes ~4 sec and the third ~90 ms.

  • Sql problem in apex LOV

    I am not able to use this SQL statement for an APEX LOV - any ideas?
    select username, id
    from abusers
    order by decode(id,0, NULL ,username) NULLS FIRST ;
    I get the following error message:
    1 error has occurred
    * LOV query is invalid, a display and a return value are needed, the column names need to be different. If your query contains an in-line query, the first FROM clause in the SQL statement must not belong to the in-line query.

    Hi
    Remove the semi-colon at the end.
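    That is, exactly your query without the terminator (APEX validates LOV queries without a trailing semi-colon):
    select username, id
    from abusers
    order by decode(id, 0, NULL, username) NULLS FIRST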
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • SQL Loader and APEX

    Is it possible to use SQL*Loader within APEX? I want to have a table updated from a CSV file... normally I would use the WWV_FLOW_FILES table, but this file mixes tabs and commas within the data, so creating a reliable upload script is getting to be a complete nightmare. I'd like to leverage SQL*Loader so I can specify the delimiter, optional quotes etc. The file will be in the same location on the file system, so a scheduled, automated process would be perfect.
    If so, does anyone know of a tutorial that might help me out?
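    One route I'm wondering about is an external table, since it exposes the SQL*Loader access driver server-side (a sketch - the directory object, file name and columns here are made up):
    -- Assumes a DIRECTORY object pointing at the folder holding the file;
    -- delimiter and optional enclosures are set just as in a .ctl file.
    CREATE TABLE staged_rows_ext (
      col1 VARCHAR2(50),
      col2 VARCHAR2(50)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('upload.csv')
    )
    REJECT LIMIT UNLIMITED;
    A DBMS_SCHEDULER job could then INSERT or MERGE from it on whatever schedule suits.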

    I really hate computers. I do. I should know by now that it would of course install properly once support is engaged.
    It installed properly just now. I was installing to get the error message to send to you. No changes were made yesterday, it's the same schema, same options for installing. I didn't restart any services or reboot the server.
    It just works now.
    I've been working with computers since 1982 and their capacity for knowing when support people are around continues to astonish me.
    I am getting 'no data found' but I was really only installing it to get the error message, so I may have skipped a step.
    Thanks all for the prompt replies.

  • Calling Sql Loader In Apex

    Hi All,
    Is there any way to call SQL*Loader from APEX to load a table with CSV data?
    Thanks in advance
    Dhan

    Do you need the application to load data for users as a feature, or are you just trying to load some data? Application Express has loading/unloading utilities on the main workspace menu that you can use for one-time loads.

  • [sql performance] inline view, group by, max, join

    Hi, everyone.
    I have a question with regard to "group by" inline views, max values, joins and SQL performance.
    I will give you simple table definitions so that you can understand my intention.
    Table A (parent)
    C1
    C2
    C3
    Table B (child)
    C1
    C2
    C3
    C4 number type(sequence number)
    1. c1, c2, c3 are the key columns of table A.
    2. c1, c2, c3, c4 are the key columns of table B.
    3. Table A is the parent table of table B.
    4. The c4 column of table B is a serial number (c4 increases from 1 in steps of 1 for each (c1, c2, c3) combination).
    The following is a simple example of the SQL query:
    select .................................
    from table_a,
    (select c1, c2, c3, max(c4)
    from table_b
    group by c1, c2, c3) table_c
    where table_a.c1 = table_c.c1
    and table_a.c2 = table_c.c2
    and table_a.c3 = table_c.c3
    The real query is not as simple as the above; more tables come after the FROM clause.
    Table A and table B are big tables, each with more than 100,000,000 rows.
    The response time of this SQL is very, very slow, as everyone would expect.
    Are there any solutions or SQL tips for the slow response time?
    I am considering adding a new column to table B to identify the row that has the max serial number.
    At this point, I am not sure whether adding a column is a good thing in every respect.
    I will be waiting for your advice, and every response will be appreciated even if it is not the solution.
    Have a good day.
    HO.

    For such big sources, check that:
    1) you use full scans, hash joins or at least merge joins
    2) you scan your source data as little as possible - in the best case, each necessary table only once (for example, not using an EXISTS clause that effectively scans a whole table via an index scan - see the sketch after this list).
    3) how much time you are spending on sorts and hash joins (either from v$session_longops directly or via some tool that visualises this info). If you are using workarea_size_policy = auto, you can probably switch to manual for this particular select and set sort_area_size and hash_area_size big enough to do as few sorts on disk as possible
    4) if you have enough free resources, i.e. a big box, you can probably consider using some parallelism
    5) if your full scans are taking a long time, check your db_file_multiblock_read_count; increasing it for this select will probably give some gain.
    6) run a trace and check what you are waiting on
    7) most probably your problem is IO-bound, so perhaps you can do something on the OS side to make IO faster
    8) if your query is now optimized as far as it can be, the disks are running like mad and you are using all the RAM, then that is probably the most you can get out of your box :)
    9) if nothing helps, then you can start thinking about precalculation, either using your idea of a derived column or some materialized views.
    10) I hope you have a test box; at least try point (9) on it first and see whether it helps.
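    On point (2), one common rewrite keeps everything to a single scan of table_b and also lets you return the rest of the max-c4 row, which the GROUP BY version cannot (a sketch - the select list is abbreviated, and whether it wins at this scale has to be tested):
    -- Rank table_b rows per (c1,c2,c3) and keep the one with the highest c4.
    SELECT a.c1, a.c2, a.c3, c.c4
    FROM   table_a a
    JOIN  (SELECT b.c1, b.c2, b.c3, b.c4,
                  ROW_NUMBER() OVER (PARTITION BY b.c1, b.c2, b.c3
                                     ORDER BY b.c4 DESC) AS rn
           FROM   table_b b) c
      ON   a.c1 = c.c1 AND a.c2 = c.c2 AND a.c3 = c.c3
    WHERE  c.rn = 1;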
    Gints Plivna
    http://www.gplivna.eu

  • How to use the d3.js library with Apex Charting

    Hello.
    I am using Apex 4.1.0 with Oracle 11gR2 and Oracle App Server (mod_plsql).
    I'm trying to incorporate the d3.js library (a visualization framework) in my APEX charts, but am not having much success.
    I found this article in which David Mann uses the library within an Apex 4.x application:
    http://ba6.us/d3js_application_express_basic_dynamic_action
    I replicated his exact steps in my own application, but without success, and I do not see how he was able to get his application to work. Indeed, the tutorial does not even use a dynamic action, despite what the article title says.
    Has anyone used the d3.js library with their APEX application? If so, would you be willing to share how you went about it?
    Thank you very much.
    Elie

    EEG wrote:
    Hello fac586.
    Thank you very much for responding/helping.
    In the article I referenced, I did note David's statement about using a "modern" browser with d3.js (one that recognizes CSS3 syntax); otherwise, the framework will not respond. And so, I was careful to run my APEX application in IE 9.x as well as in Firefox 16.x. But all I see is an empty region with a title. No chart. Nothing.
    I suspect one of my problems here is in getting the chart to refresh every "n" seconds. For this, I think a dynamic action would be used, though I'm not sure how to go about doing so.
    That's included in the sample code (line 99). Strangely, Dynamic Actions don't seem to include a native timer event... however, there is a plug-in.
    More problematic, though, is that I am not seeing any chart whatsoever in the region. I would have expected to see some chart data, even if it is not automagically refreshing.
    I've created my example in my EEG workspace on apex.oracle.com:
    Workspace: EEG
    Username: [email protected]
    Password: galaxy (note: all lowercase)
    Please see application 27083, called Elie_Goodies, page 25. This page has an associated tab called, appropriately enough, "d3.js Library".
    The Safari console showed a couple of JavaScript errors.
    1. The URL used to include the d3 code in the blog article:
    <script type="text/javascript" src="http://mbostock.github.com/d3/d3.js"></script>
    is returning HTML, not JavaScript. Changing it to the one given on the d3js.org site:
    <script src="http://d3js.org/d3.v2.js"></script>
    includes the correct script.
    2. There was a syntax error in a script in the Run Code region. I think there was some kind of issue arising from copying from the blog article: it looked like line endings hadn't been respected as the code wasn't formatted properly. Pasting it from the blog into Coda's editor and then into the APEX Region Source text area fixed the format, and it then ran first time.
    Thanks for the heads-up. I'll also be looking further into d3.

  • No of columns in a table and SQL performance

    How does table size affect SQL performance?
    I am comparing 2 tables with the same number of rows (54 million rows):
    table1 (columns a,b,c,d,e,f...) has 40 columns
    table2 has columns (a,b,c,d)
    The SQL uses columns a and b.
    The SQL using table2 runs in 1 sec.
    The SQL using table1 runs in 30 min.
    Can anyone please let me know how the table size and the number of columns in a table affect the performance of SQL?
    Thanks
    jeevan.

    user600431 wrote:
    This is a general question. I just want to compare a table with more columns and a table with fewer columns, with the same number of rows.
    I am finding that the table with fewer columns performs better than the table with more columns.
    Assuming there are no row chains, will there be any difference in performance with the number of columns in a table?
    Jeevan,
    the question is not how many columns your table has, but how large your table segment is. If your query runs a full table scan, it has to read through the whole table segment, so in that case the size of the table matters.
    A table having more columns potentially has a larger row size than a table with fewer columns, but this is not a general rule. Think of large columns, e.g. VARCHAR2 columns, think of blank (NULL) columns, and you can easily end up with a table consisting of a single column taking up more space per row than a table with 200 columns consisting only of VARCHAR2(1) columns.
    Check the DBA/ALL/USER_SEGMENTS view to determine the size of your two table segments. If you gather statistics on the tables then the dictionary will contain information about the average row size.
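    For example (a sketch - substitute your actual table names):
    -- Compare the two table segments' sizes; run as the owning user.
    SELECT segment_name,
           ROUND(bytes / 1024 / 1024) AS size_mb
    FROM   user_segments
    WHERE  segment_name IN ('TABLE1', 'TABLE2');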
    If your query is using indexes then the size of the table won't affect the query performance significantly in many cases.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • SQL PERFORMANCE ANALYZER of 11g

    I am using 11g on Windows 2000. I want to run SQL Performance Analyzer to see the impact of a parameter change on some SQLs. Currently I am using it in a test environment, but eventually I want to apply it to the production environment.
    Let us say I want to see the effect of different values of db_file_multiblock_read_count.
    When I run this in my database, will the changed values impact only the session where I am running SQL Performance Analyzer, or will they impact other sessions accessing the same database instance? I think it impacts only the session where SQL Performance Analyzer is being run, but I want to make sure that is the case.
    Appreciate your feedback.

    You said you think it impacts only the session where SQL Performance Analyzer is being run, but want to make sure that is the case.
    The database instance is part of a larger 'system' which includes a fixed set of physical resources. Your session, and every other session, works within the constraints of those resources. When you change how the current SQL statement runs, you will be shifting the balance between those resources.
    For example, a disk can only respond to one access request at a time. A memory location can be used for one piece of data at a time. A DB cache buffer can only reflect one block at a time. There are a lot of 'points of serialization'.
    Although the major impact should be on the current session, there will be some impact on every other session in the system.
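    Note that the direct scope of the test can be kept to your own session, since db_file_multiblock_read_count is session-modifiable - though, as described above, the I/O it drives still competes for shared resources:
    -- Direct effect limited to this session; other sessions keep the instance value.
    ALTER SESSION SET db_file_multiblock_read_count = 64;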
    By the way, there is an 'edit' button available to you for every post you create. As a courtesy, you could edit the title of the duplicate and let us know it is indeed a duplicate - or you could edit that other thread to ask the other question you were going to ask.

  • SQL Performance and Security

    Help needed here, please. I am new to this concept and I am working on a tutorial based on SQL performance and security. I have worked my head around this, but now I am stuck.
    Here is the questions:
    1. Analyse possible performance problems, and suggest solutions, for each of the following transactions against the database:
    a) A manager of a project needs to inspect total planned and actual hours spent on a project, broken down by activity, e.g.:
    Project: xxxxxxxxxxxxxx
    Activity Code    Planned    Actual (to date)
    1                20         25
    2                30         30
    3                40         24
    Total            300        200
    Note that actual time spent on an activity must be calculated from the WORK_UNIT table.
    b)On several lists (e.g. list or combo boxes) in the on-line system it is necessary to identify completed, current, or future projects.
    2. Security: Justify and implement solutions at the server that meet the following security requirements:
    (i)Only members of the Corporate Strategy Department (which is an organisation unit) should be able to enter, update and delete data in the project table. All users should be able to read this information.
    (ii)Employees should only be able to read information from the project table (excluding the budget) for projects they are assigned to.
    (iii)Only the manager of a project should be able to update (insert, update, delete) any non-key information in the project table relating to that project.
    Here is the project tables
    set echo on
    -- Changes
    -- 4.10.00
    -- manager of employee on a project included in the employee_on_project table
    -- activity table now has a compound key, based on ID dependence between
    -- project and activity
    drop table org_unit cascade constraints;
    drop table project cascade constraints;
    drop table employee cascade constraints;
    drop table employee_on_project cascade constraints;
    drop table employee_on_activity cascade constraints;
    drop table activity cascade constraints;
    drop table activity_order cascade constraints;
    drop table work_unit cascade constraints;
    -- org_unit
    -- type - for example in lmu might be FACULTY, or SCHOOL
    CREATE TABLE org_unit (
      ou_id            NUMBER(4)    CONSTRAINT ou_pk PRIMARY KEY,
      ou_name          VARCHAR2(40) CONSTRAINT ou_name_uq UNIQUE
                                    CONSTRAINT ou_name_nn NOT NULL,
      ou_type          VARCHAR2(30) CONSTRAINT ou_type_nn NOT NULL,
      ou_parent_org_id NUMBER(4)    CONSTRAINT ou_parent_org_unit_fk
                                    REFERENCES org_unit
    );
    -- project
    CREATE TABLE project (
      proj_id                NUMBER(5)    CONSTRAINT project_pk PRIMARY KEY,
      proj_name              VARCHAR2(40) CONSTRAINT proj_name_uq UNIQUE
                                          CONSTRAINT proj_name_nn NOT NULL,
      proj_budget            NUMBER(8,2)  CONSTRAINT proj_budget_nn NOT NULL,
      proj_ou_id             NUMBER(4)    CONSTRAINT proj_ou_fk REFERENCES org_unit,
      proj_planned_start_dt  DATE,
      proj_planned_finish_dt DATE,
      proj_actual_start_dt   DATE
    );
    -- employee
    CREATE TABLE employee (
      emp_id       NUMBER(6)    CONSTRAINT emp_pk PRIMARY KEY,
      emp_name     VARCHAR2(40) CONSTRAINT emp_name_nn NOT NULL,
      emp_hiredate DATE         CONSTRAINT emp_hiredate_nn NOT NULL,
      ou_id        NUMBER(4)    CONSTRAINT emp_ou_fk REFERENCES org_unit
    );
    -- activity
    -- note each activity is associated with a project
    -- act_type is the type of the activity, for example ANALYSIS, DESIGN, BUILD,
    -- USER ACCEPTANCE TESTING ...
    -- each activity has a people budget, in other words an amount to spend on wages
    CREATE TABLE activity (
      act_id               NUMBER(6),
      act_proj_id          NUMBER(5)    CONSTRAINT act_proj_fk REFERENCES project
                                        CONSTRAINT act_proj_id_nn NOT NULL,
      act_name             VARCHAR2(40) CONSTRAINT act_name_nn NOT NULL,
      act_type             VARCHAR2(30) CONSTRAINT act_type_nn NOT NULL,
      act_planned_start_dt DATE,
      act_actual_start_dt  DATE,
      act_planned_end_dt   DATE,
      act_actual_end_dt    DATE,
      act_planned_hours    NUMBER(6)    CONSTRAINT act_planned_hours_nn NOT NULL,
      act_people_budget    NUMBER(8,2)  CONSTRAINT act_people_budget_nn NOT NULL,
      CONSTRAINT act_pk PRIMARY KEY (act_id, act_proj_id)
    );
    -- employee on project
    -- when an employee is assigned to a project, an hourly rate is set
    -- remember that the person's manager depends on the project they are on,
    -- the implication being that the manager needs to be assigned to the project
    -- before the 'managed'
    CREATE TABLE employee_on_project (
      ep_emp_id      NUMBER(6)   CONSTRAINT ep_emp_fk REFERENCES employee,
      ep_proj_id     NUMBER(5)   CONSTRAINT ep_proj_fk REFERENCES project,
      ep_hourly_rate NUMBER(5,2) CONSTRAINT ep_hourly_rate_nn NOT NULL,
      ep_mgr_emp_id  NUMBER(6),
      CONSTRAINT ep_pk PRIMARY KEY (ep_emp_id, ep_proj_id),
      CONSTRAINT ep_mgr_fk FOREIGN KEY (ep_mgr_emp_id, ep_proj_id)
        REFERENCES employee_on_project
    );
    -- employee on activity
    CREATE TABLE employee_on_activity (
      ea_emp_id        NUMBER(6),
      ea_proj_id       NUMBER(5),
      ea_act_id        NUMBER(6),
      ea_planned_hours NUMBER(3) CONSTRAINT ea_planned_hours_nn NOT NULL,
      CONSTRAINT ea_pk PRIMARY KEY (ea_emp_id, ea_proj_id, ea_act_id),
      CONSTRAINT ea_act_fk FOREIGN KEY (ea_act_id, ea_proj_id) REFERENCES activity,
      CONSTRAINT ea_ep_fk FOREIGN KEY (ea_emp_id, ea_proj_id)
        REFERENCES employee_on_project
    );
    -- activity order
    -- only need a prior activity. If activity A is followed by activity B then
    -- B is the prior activity of A
    CREATE TABLE activity_order (
      ao_act_id       NUMBER(6),
      ao_proj_id      NUMBER(5),
      ao_prior_act_id NUMBER(6),
      CONSTRAINT ao_pk PRIMARY KEY (ao_act_id, ao_prior_act_id, ao_proj_id),
      CONSTRAINT ao_act_fk FOREIGN KEY (ao_act_id, ao_proj_id)
        REFERENCES activity (act_id, act_proj_id),
      CONSTRAINT ao_prior_act_fk FOREIGN KEY (ao_prior_act_id, ao_proj_id)
        REFERENCES activity (act_id, act_proj_id)
    );
    -- work unit
    -- remember that DATE includes time
    CREATE TABLE work_unit (
      wu_emp_id   NUMBER(5),
      wu_act_id   NUMBER(6),
      wu_proj_id  NUMBER(5),
      wu_start_dt DATE CONSTRAINT wu_start_dt_nn NOT NULL,
      wu_end_dt   DATE CONSTRAINT wu_end_dt_nn NOT NULL,
      CONSTRAINT wu_pk PRIMARY KEY (wu_emp_id, wu_proj_id, wu_act_id, wu_start_dt),
      CONSTRAINT wu_ea_fk FOREIGN KEY (wu_emp_id, wu_proj_id, wu_act_id)
        REFERENCES employee_on_activity (ea_emp_id, ea_proj_id, ea_act_id)
    );
    /* enter data */
    start ouins
    start empins
    start projins
    start actins
    start aoins
    start epins
    start eains
    start wuins
    start pmselect
    I have the tables containing ouins and the rest. Email me on [email protected] if you want to have a look at the tables.

    The answer to your 2nd question is easy. Create database roles for the various groups of people who are allowed to access or perform the various DML actions.
    Then assign the various users to these roles; the users will be restricted to what the roles are restricted to.
    Look up roles if you are not familiar with them.
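    For requirement (i), for instance, something along these lines (a sketch - the role and user names are hypothetical):
    -- DML for the Corporate Strategy group, read access for everyone.
    CREATE ROLE corp_strategy_dml;
    GRANT INSERT, UPDATE, DELETE ON project TO corp_strategy_dml;
    GRANT SELECT ON project TO PUBLIC;
    GRANT corp_strategy_dml TO strategy_user;
    Requirements (ii) and (iii) need row- and column-level restrictions, which roles alone cannot express; a view that excludes proj_budget and filters on the user's assignments (or VPD) is the usual route.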

  • How to improve my PL/SQL performance tuning skills

    Hi All, I would like to learn more about PL/SQL performance tuning. Where or how can I get more knowledge in this area?
    Are there any tutorials which can help me to understand EXPLAIN PLAN, DBMS_PROFILER, DBMS_ADVISOR etc. more? Thanks. Bcj

    Explain plan
    http://www.psoug.org/reference/explain_plan.html
    DBMS_PROFILER (10g)
    http://www.psoug.org/reference/dbms_profiler.html
    DBMS_HPROF (11g)
    http://www.psoug.org/reference/dbms_hprof.html
    DBMS_ADVISOR
    http://www.psoug.org/reference/dbms_advisor.html
    DBMS_MONITOR
    http://www.psoug.org/reference/dbms_monitor.html
    DBMS_SUPPORT
    http://www.psoug.org/reference/dbms_support.html
    DBMS_TRACE
    http://www.psoug.org/reference/dbms_trace.html
    DBMS_SQLTUNE
    http://www.psoug.org/reference/dbms_sqltune.html
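    As a quick taste of the first link (a minimal example - EXPLAIN PLAN output is the starting point for most of the tools above; the table is assumed to be the HR sample schema's):
    -- Explain a statement, then render the stored plan.
    EXPLAIN PLAN FOR
      SELECT * FROM employees WHERE department_id = 10;
    SELECT * FROM table(DBMS_XPLAN.DISPLAY);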

  • Best Performance for apex over Linux+HW RAID

    Dear Experts,
    Given:
    -- Server Specs:
    ---- HP ProLiant ML370 G6 E5540
    ---- Processor name: Intel® Xeon® E5540 (4 core, 2.53 GHz, 8MB L3, 80W)
    ---- Storage Controller: (1) Smart Array P410i/256MB
    ---- 8 SAS HardDisks with 320 GB for each
    ---- 12 GB RAM
    Required:
    1)) What are the best practices to get the best performance from APEX with the following solution stack?
    ---- Oracle Linux 6.3 x86_64
    ---- Grid Infrastructure 11g R2
    ---- Database 11g R2
    ---- Oracle Apex 4.2.1
    2)) What is the best hardware RAID configuration?
    3)) What is the maximum number of concurrent users for applications on APEX, given the above specs + software stack?
    Regards
    Mahmoud

    Dear Alvaro
    Thank you for your response.
    The current status
    When I entered HP ACU from bootable Smart Start CD, I found under the configuration of Smart Array P410i in Embedded Slot, the following:
    -Smart Array P410i in Embedded Slot
    ---Internal Drive Cage at Port 1I : Box 1
    ------300 GB 2-Part SAS Drive at Part 1I : Box 1 : Bay 1
    ------300 GB 2-Part SAS Drive at Part 1I : Box 1 : Bay 2
    ------300 GB 2-Part SAS Drive at Part 1I : Box 1 : Bay 3
    ------300 GB 2-Part SAS Drive at Part 1I : Box 1 : Bay 4
    ---Internal Drive Cage at Port 2I : Box 1
    ------300 GB 2-Part SAS Drive at Part 2I : Box 1 : Bay 5
    ------300 GB 2-Part SAS Drive at Part 2I : Box 1 : Bay 6
    ------300 GB 2-Part SAS Drive at Part 2I : Box 1 : Bay 7
    ------300 GB 2-Part SAS Drive at Part 2I : Box 1 : Bay 8
    The questions now:
    1) Do you recommend the following configuration for RAID's logical arrays:
    Using Logical View:
    SAS Array A - 4 Logical Drive(s)
    --- Logical Drive 1 (50.0 GB, RAID 1+0) ---> for OS
    --- Logical Drive 2 (24.0 GB, RAID 0) ---> for SWAP
    --- Logical Drive 3 (200.0 GB, RAID 1+0) ---> for ASM DATA
    --- Logical Drive 4 (296.7 GB, RAID 1+0) ---> for ASM FRA
    SAS Array B - 1 Logical Drive(s)
    --- Logical Drive 5 (1.1 TB, RAID 0) ---> for non-critical applications and sources
    2) What do you recommend for the remaining steps to get Oracle APEX 4.2 installed?
    Best Regards
    Mahmoud

  • EXEC SQL PERFORMING

    When I run the following program, it reports the error
    "the error occurred in the current database connection "DEFAULT"".
    How can I solve the problem?
    =================================
    DATA : BEGIN OF WA,
      CLIENT(3),
      ARG1(3),
      ARG2(3),
      END OF WA.
    DATA F3  VALUE '1'.
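    * Note: WA has three fields (CLIENT, ARG1, ARG2) but the native SQL below
    * selects only two columns into :WA, so the INTO list does not match the
    * structure; LOOP_OUTPUT also writes WA-ARG2, which is never selected.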
    EXEC SQL PERFORMING LOOP_OUTPUT.
      SELECT CLIENT , ARG1 INTO  :WA FROM TABLE_001 WHERE ARG2 = :F3.
    ENDEXEC.
    FORM LOOP_OUTPUT.
      WRITE : / WA-CLIENT,WA-ARG2.
    ENDFORM.
    ==================================

    hi
    try:
    SELECT * FROM TABLE_001 INTO CORRESPONDING FIELDS OF WA WHERE ARG2 = F3.
      WRITE : / WA-CLIENT, WA-ARG2.
    ENDSELECT.
    KUMAR
