Insert select statement or insert in cursor

hi all,
I need a performance comparison between inserting rows one by one from a cursor loop and a single INSERT ... SELECT statement,
for example:
1. insert into TableA (ColumnA, ColumnB) select A, B from TableB;
2. declare
     cursor my_cur is select A, B from TableB;
   begin
     for my_rec in my_cur loop
       insert into TableA (ColumnA, ColumnB) values (my_rec.A, my_rec.B);
     end loop;
   end;
also "bulk collect into" can be used.
Which one has a better performance?
Thanks for your help,
kadriye

What's stopping you from making 100,000 rows of test data and trying it for yourself?
Edit: I was bored enough to do it myself.
Starting insert as select 22-JUL-08 11.43.19.544000000 +01:00
finished insert as select. 22-JUL-08 11.43.19.825000000 +01:00
starting cursor loop 22-JUL-08 11.43.21.497000000 +01:00
finished cursor loop 22-JUL-08 11.43.35.185000000 +01:00
The two-second gap between the two is for the delete.
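For anyone who wants to reproduce the comparison, here is a minimal benchmark sketch (the table names src and tgt are placeholders, not from this thread); it times the set-based insert, the row-by-row loop, and a BULK COLLECT / FORALL variant:
-- Assumes src(a, b) holds the test data (e.g. 100,000 rows) and tgt(a, b) is empty.
declare
  type t_rows is table of src%rowtype;
  l_rows t_rows;
begin
  dbms_output.put_line('insert/select start: ' || systimestamp);
  insert into tgt (a, b) select a, b from src;
  dbms_output.put_line('insert/select end:   ' || systimestamp);
  rollback;
  dbms_output.put_line('cursor loop start:   ' || systimestamp);
  for r in (select a, b from src) loop
    insert into tgt (a, b) values (r.a, r.b);
  end loop;
  dbms_output.put_line('cursor loop end:     ' || systimestamp);
  rollback;
  dbms_output.put_line('bulk collect start:  ' || systimestamp);
  select * bulk collect into l_rows from src;
  forall i in 1 .. l_rows.count
    insert into tgt values l_rows(i);
  dbms_output.put_line('bulk collect end:    ' || systimestamp);
  rollback;
end;
/
In tests like this the single INSERT ... SELECT is normally fastest (as the timings above show), FORALL comes close, and the row-by-row loop is slowest because of the per-row context switches between PL/SQL and SQL.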

Similar Messages

  • Smart scan not working with Insert Select statements

    We have observed that smart scan is not working with INSERT ... SELECT statements, but it does work when the SELECT statements are executed alone.
    Can you please help us explain this behavior?

    There is a specific exadata forum - you would do better to post the question there: Exadata
    I can't give you a definitive answer, but it's possible that this is simply a known limitation similar to the way that "Create table as select" won't run the select statement the same way as the basic select if it involves a distributed query.
    Regards
    Jonathan Lewis
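    As a hedged aside (not from the original reply): one way to see whether a given run was offloaded is to snapshot the cell-related session statistics before and after the statement and compare the deltas. The query below relies only on the standard v$mystat / v$statname views; the exact statistic names vary with the Exadata and database versions.
    select sn.name, ms.value
      from v$mystat ms
      join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name like 'cell%'
     order by sn.name;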

  • Select statement from "v_filename_txt" inside cursor

    Hi,
    I've got a stored procedure that creates parametric tables to upload from CSV files. I have collected into the variable v_filename_txt the name of the parametric table against which I would like to run some selects.
    I have written the following code, but I can't use a variable as the table name in a select statement:
    select field
    FROM TABLE '''||v_filename_txt||''' ;
    Does anybody know how to include the preceding statement in a cursor so I can retrieve records?
    Paolo.

    You could create an external table, then select the values from this external table.
    You can switch the source of the external table to a new file with the
    alter table myExtTable location ('mySecondCsvFIle.csv');
    command.
    Of course this works only when all the files have the same structure. But the idea is: mimic the structure of the CSV files in the database with an external table definition, then read the values and move them to some other table.
    Insert into myParameterTable select * from myExtTable;
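    A minimal sketch of such an external table definition (the directory object, file name and columns below are placeholders, not from the thread); once it exists, the alter table ... location and insert ... select shown above do the rest:
    create or replace directory csv_dir as '/data/csv';
    create table myExtTable (
      field1 varchar2(100),
      field2 varchar2(100)
    )
    organization external (
      type oracle_loader
      default directory csv_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
      )
      location ('myFirstCsvFile.csv')
    );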

  • Insert select statement is taking ages

    Sybase version: Adaptive Server Enterprise/12.5.4/EBF 15432 ESD#8/P/Sun_svr4/OS 5.8/ase1254/2105/64-bit/FBO/Sat Mar 22 14:38:37 2008
    Hi guys,
    I have a question about the performance of a statement that is very slow, and I'd like to have your input.
    I have the SQL statement below that is taking ages to execute, and I can't find out how to improve it:
    insert SST_TMP_M_TYPE select M_TYPE from MKT_OP_DBF M join TRN_HDR_DBF T on M.M_ORIGIN_NB=T.M_NB where T.M_LTI_NB=@Nson_lti
    @Nson_lti is the same datatype as T.M_LTI_NB
    M.M_ORIGIN_NB=T.M_NB have the same datatype
    TRN_HDR_DBF has 1424951 rows and indexes on M_LTI_NB and M_NB
    table MKT_OP_DBF has 870305 rows
    table MKT_OP_DBF has an index on M_ORIGIN_NB column
    Statistics for index:                   "MKT_OP_ND7" (nonclustered)
    Index column list:                      "M_ORIGIN_NB"
         Leaf count:                        3087
         Empty leaf page count:             0
         Data page CR count:                410256.0000000000000000
         Index page CR count:               566.0000000000000000
         Data row CR count:                 467979.0000000000000000
         First extent leaf pages:           0
         Leaf row size:                     12.1161512343373872
         Index height:                      2
    The representation of M_ORIGIN_NB is
    Statistics for column:                  "M_ORIGIN_NB"
    Last update of column statistics:       Mar  9 2015 10:48:57:420AM
         Range cell density:                0.0000034460903826
         Total density:                     0.0053334921767125
         Range selectivity:                 default used (0.33)
         In between selectivity:            default used (0.25)
    Histogram for column:                   "M_ORIGIN_NB"
    Column datatype:                        numeric(10,0)
    Requested step count:                   20
    Actual step count:                      20
         Step     Weight                    Value
            1     0.00000000        <       0
            2     0.07300889        =       0
            3     0.05263098       <=       5025190
            4     0.05263098       <=       9202496
            5     0.05263098       <=       12664456
            6     0.05263098       <=       13129478
            7     0.05263098       <=       13698564
            8     0.05263098       <=       14735554
            9     0.05263098       <=       15168461
           10     0.05263098       <=       15562067
           11     0.05263098       <=       16452862
           12     0.05263098       <=       16909265
           13     0.05263212       <=       17251573
           14     0.05263098       <=       18009609
           15     0.05263098       <=       18207523
           16     0.05263098       <=       18404113
           17     0.05263098       <=       18588398
           18     0.05263098       <=       18793585
           19     0.05263098       <=       18998992
           20     0.03226340       <=       19574408
    If I look at the showplan, I can see indexes on TRN_HDR_DBF are used but not the one on MKT_OP_DBF
    QUERY PLAN FOR STATEMENT 16 (at line 35).
        STEP 1
            The type of query is INSERT.
            The update mode is direct.
            FROM TABLE
                MKT_OP_DBF
                M
            Nested iteration.
            Table Scan.
            Forward scan.
            Positioning at start of table.
            Using I/O Size 32 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            FROM TABLE
                TRN_HDR_DBF
                T
            Nested iteration.
            Index : TRN_HDR_NDX_NB
            Forward scan.
            Positioning by key.
            Keys are:
                M_NB  ASC
            Using I/O Size 4 Kbytes for index leaf pages.
            With LRU Buffer Replacement Strategy for index leaf pages.
            Using I/O Size 4 Kbytes for data pages.
            With LRU Buffer Replacement Strategy for data pages.
            TO TABLE
                SST_TMP_M_TYPE
            Using I/O Size 4 Kbytes for data pages.
    I was expecting the query to use the index also on MKT_OP_DBF
    Thanks for your advice,
    Simon

    The total density number for the MKT_OP_DBF.M_ORIGIN_NB column doesn't look very good:
         Range cell density:           0.0000034460903826
         Total density:                0.0053334921767125
    Notice the total density value is three orders of magnitude larger than the range cell density ... which can indicate a largish number of duplicates.  (NOTE: This wide difference between range cell and total density can be referred to as 'skew' - more on this later.)
    Do some M_ORIGIN_NB values have a large number of duplicates?  What does the following query return:
    =====================
    select top 30 M_ORIGIN_NB, count(*)
    from   MKT_OP_DBF
    group by M_ORIGIN_NB
    order by 2 desc, 1
    =====================
    The total density can be used to estimate the number of rows expected for a join (eg, TRN_HDR_DBF --> MKT_OP_DBF).  The largish total density number, when thrown into the optimizer's calculations, may be causing the optimizer to think that the volume of *possible* joins will be more expensive than a join in the opposite direction (MKT_OP_DBF --> TRN_HDR_DBF) which in turn means (as Jeff's pointed out) that you end up table scanning MKT_OP_DBF (as the outer table) because of no SARGs.
    From your description it sounds like you've got the necessary indexes to support a TRN_HDR_DBF --> MKT_OP_DBF join order. (Though it wouldn't hurt to see the complete output from sp_helpindex for both tables just to make sure we're on the same sheet of music.)
    It's hard to say much more without additional details (eg, complete stats for both tables, sp_help output for both tables); if you decide to post these I'd recommend posting them as a *.txt attachment.
    I'm assuming you *know* that a join from TRN_HDR_DBF --> MKT_OP_DBF should be much quicker than what you're currently seeing.  If this is the case, I'd probably want to start with:
    =====================
    exec sp_modifystats MKT_OP_DBF, M_ORIGIN_NB, REMOVE_SKEW_FROM_DENSITY
    go
    exec sp_recompile MKT_OP_DBF
    go
    -- run your query again
    =====================
    By removing the skew from the total density (ie, set total density = range cell density = 0.00000344...) you're telling the optimizer that it can expect a much smaller number of joins for the join order of TRN_HDR_DBF --> MKT_OP_DBF ... and that may be enough for the optimizer to use TRN_HDR_DBF to drive the query.
    NOTE: If sp_modifystats/REMOVE_SKEW_FROM_DENSITY provides the desired join order, keep in mind that you'll need to re-issue this command after each update stats command that modifies the stats on the M_ORIGIN_NB column.  For example, modify your update stats maintenance job to issue sp_modifystats/REMOVE_SKEW_FROM_DENSITY for those special cases where you know it helps query performance.

  • Select statement to insert into a table using a loop

    hi
    create table uploadtab(
    itema varchar2(3),
    xtype varchar2(1),
    salesa number,
    margina number,
    salesb number,
    marginb number,
    salesc number,
    marginc number);
    insert into uploadtab
      (itema, xtype, salesa, margina, salesb, marginb, salesc, marginc)
    values
      ('abc', 'a', 100, .40, 300, .10, 450, .25);
    create table testinsert(itema varchar2(3),
    xtype varchar2(1),
    sales number,
    margin number);
    What I want to do is create 3 records based on that one row, in a loop,
    so my desired output for testinsert is as follows
    abc  a 100  .40
    abc  a 300  .10
    abc  a 450  .25
    I don't want to use 3 separate insert statements if at all possible.
    any help would be greatly appreciated
    thanks in advance

    Just a question on this
    INSERT INTO testinsert
    ( itema
    , xtype
    , sales
    , margin
    )
    SELECT  itema
    ,       xtype
    ,       DECODE
            ( RN
            , 1, salesa
            , 2, salesb
            , 3, salesc
            )
    ,       DECODE
            ( RN
            , 1, margina
            , 2, marginb
            , 3, marginc
            )
    FROM            uploadtab
    CROSS JOIN      (
                            SELECT ROWNUM RN
                            FROM   dual
                            CONNECT BY LEVEL <= 3
                    )
    ;
    When you put a WHERE before the CROSS JOIN, the error
    ORA-00933: SQL command not properly ended
    is displayed.
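    In case it helps with the ORA-00933: the WHERE clause has to come after the entire FROM clause, i.e. after the cross-joined inline view, not before the CROSS JOIN keyword. A sketch reusing the statement above (the filter condition is only an example):
    INSERT INTO testinsert (itema, xtype, sales, margin)
    SELECT itema,
           xtype,
           DECODE(rn, 1, salesa, 2, salesb, 3, salesc),
           DECODE(rn, 1, margina, 2, marginb, 3, marginc)
    FROM   uploadtab
    CROSS JOIN (SELECT ROWNUM rn FROM dual CONNECT BY LEVEL <= 3)
    WHERE  itema = 'abc';   -- WHERE goes here, after the last item in the FROM clause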

  • Behaviour of insert ... select statement

    Hi,
    If we use an insert ... select statement like
    insert into TableA
    select * from TableB;
    and TableB has 250,000 rows, what exactly happens?
    Will all 250,000 rows be fetched into the database buffer cache (with index scans performed where appropriate) and then inserted into TableA,
    or
    will it be done in parallel?
    We are loading large volumes of data and are facing performance problems related to index scans.
    Just Curious :)
    Rushang.

    It's not a secret: Oracle will perform the select, using indexes if it decides to (depending on source table size, stats, optimizer mode, etc.). Rows may be pulled into memory or written to temp space (e.g. if you were returning many, many rows which needed to be grouped). The rows will then be inserted.
    So, if you have an index on tableb.col1, Oracle may use the index, or it may do a full table scan. Either way, it will only select the needed rows to be inserted.
    The insert does not prevent the select from working as it would normally.
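    Not part of the original reply, but since the question touches on parallelism: for large loads a direct-path and/or parallel insert is the usual tool. A hedged sketch (the APPEND hint and parallel DML are standard Oracle features; the degree of 4 and the table names are only illustrative):
    -- Direct-path insert: loads above the high-water mark and bypasses the
    -- conventional buffer-cache path for the inserted blocks.
    insert /*+ append */ into TableA
    select * from TableB;
    commit;
    -- Parallel DML must be enabled at session level before it is used.
    alter session enable parallel dml;
    insert /*+ append parallel(TableA, 4) */ into TableA
    select /*+ parallel(TableB, 4) */ * from TableB;
    commit;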

  • SELECT - INSERT/UPDATE statement mapping.

    This might be a silly question, but are there any Java libraries that can perform an arbitrary SELECT statement to INSERT/UPDATE statement mapping? Or indeed, is this in fact possible?
    Just to be a little clearer, if we had the statement;
    SELECT [Field1] FROM [Table1]
    It would map down to;
    INSERT INTO [Table1] VALUES [Field1]='foo'
    Please excuse my INSERT syntax if it is incorrect, I hardly ever use these statements.
    The reason that I ask is because I am writing a piece of software that can perform quite complex extractions from databases, and it would be nice to be able to automatically 'reverse' the extraction, and perform updates on the extracted data.
    Unfortunately, because of the architecture and requirements of the application, I can not just update a resultset and call the update() method.
    Thanks for your time.
    Ben

    You can define a Bean or simple class to represent a table record type (table structure), defining all fields, the primary key and maybe more.
    Then deriving a SELECT, INSERT or UPDATE statement can be done with ease.

  • How to Insert-Select and Update base table

    Hi All,
    What is the best way to do this?
    I have a table A that has 1 million rows.
    I need to insert-select into table B from a GROUP BY on table A,
    and then update all rows in table A that were used in the insert with a flag.
    (select a hundred from A and insert 10 thousands into B)
    What to do?
    1-Update table A with flag before Insert-Select?
    2-Insert-Select group by into table B, before Update?
    3-Another way?
    Thanks in advance,
    Edson

    Either way. But you may find that updating the source flag first and then using that flag as part of your where clause when extracting rows to put in the destination table is a bit faster.
    In any case, you will commit only once after all of the work is done.
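    A minimal sketch of the flag-first approach described above (table, column and flag names are placeholders):
    -- 1. Mark the rows of A that belong to this batch.
    update table_a
       set load_flag = 'Y'
     where load_flag is null;
    -- 2. Insert the aggregated rows into B, driven by the same flag.
    insert into table_b (group_col, total_amount)
    select group_col, sum(amount)
      from table_a
     where load_flag = 'Y'
     group by group_col;
    -- 3. Commit once, after both steps have succeeded.
    commit;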

  • Disable logging of SELECT statements

    I would like to disable the logging of SELECT statements without disabling the logging of other SQL. From what I can see in the documentation, I am limited to a level of all SQL or no SQL.
    Reasoning: There are 100X as many SELECT statements as INSERT or UPDATE generated by TopLink for my app, and it bloats my log files enormously. I use the INSERT / UPDATE statements for debugging from time to time so I still want to log them, but the SELECTS are never useful.
    Can I define some custom logger class that handles this?
    Thanks,
    Stucco

    You could write your own SessionLog subclass for this. You would need to override the log method, search the SessionLogEntry message for SELECT, and ignore those messages.

  • Select statement works in 8.1.5 but does not work in 8.1.7 - why?

    I have written a select statement as part of a cursor in a
    database trigger. The following statement works fine in Oracle
    8i version 8.1.5.0, but the same statement returns error PLS-00103
    (a parser-related error) in Oracle 8i version 8.1.7.0.
    The error shown is
    PLS-00103: Encountered the symbol ")" when expecting one of the
    following:
    from
    The Sql statement in a Database Trigger is as follows
    select opn_stk, iss_qty, rec_qty, ROWID
    from mnth_bal
    where to_date(lpad(to_char(month),2,'0')||to_char(year),'mmyyyy') >= SYSDATE
    and REV_NO = :NEW.REV_NO
    and ITEM_STOCK_ITEM_CD = :NEW.ITEM_CD
    and unit_cd = :NEW.unit_cd
    and store_cd = :new.store_cd
    order by to_date(lpad(to_char(month),2,'0')||to_char(year),'mmyyyy');
    Can anyone please tell me what the problem is? In 8.1.5.0 the
    same statement in the trigger compiles and runs absolutely fine.
    Because of this problem the trigger is not compiling in 8.1.7.0.
    I would be grateful for any solutions.

    Relating line numbers in compile errors to source code.
    Oracle does not count blank lines in line numbers. Commenting
    with "--" blank lines or deleting blank lines will make the line
    counter more accurate.
    Oracle counts comments delimited with /*...*/ as one line, no
    matter how many lines are in the source code. Using only "--"
    will make the line counter more accurate.
    Oracle does not count lines before the PL/SQL block DECLARE when
    compiling triggers. (CREATE ... TRIGGER....FOR EACH...WHEN(...))
    parts of the syntax are not counted. The line in the source
    code with the DECLARE of the trigger's PL/SQL block will be line
    1.
    Hint: Avoid uncertainty. Use "SHO ERR TRIGGER <trigger_name>"
    when looking at compile problems.
    Good Luck....

  • Fetching a value from a select statement inside select clause

    hello all,
    I have a problem executing a procedure; it gives me a runtime error. What I am doing is: I have multiple select statements inside a select clause, and I am using the entire select statement for a ref cursor. When I run the query separately I am able to get the records, but it does not work from the procedure.
    here is the piece of code which am working with
    create or replace procedure cosd_telecommute_procedure
    (p_from_date in date,
    p_to_date in date,
    p_rset in out sys_refcursor)
    as
    p_str varchar2(10000);
    begin
    p_str := 'select personnum, '||
    'fullname, '||
                   'personid, '||
         'hours, '||
         'applydtm, '||
    'paycodeid, '||
         'laborlev2nm, '||
         'laborlev3nm, '||
         'laborlev2dsc, '||
    'adjdate, '||
         'timeshtitemtypeid, '||
         '(select max(eff_dt) from cosd_telecommute_info_tbl '||
         'where emplid = cosd_vp_telecommute.personnum and '||
    'eff_dt between ''01-aug-2005'' and ''30-aug-2005'') thisyreffdt, '||
                   '(select elig_config7 from cosd_telecommute_info_tbl '||
                   'where emplid = cosd_vp_telecommute.personnum and '||
    'eff_dt = (select max(eff_dt) from cosd_telecommute_info_tbl '||
                   'where emplid = cosd_vp_telecommute.personnum and '||
    'eff_dt between ''01-aug-2005'' and ''30-aug-2005'')) thisyrmiles,'||
    '(select max(eff_dt) from cosd_telecommute_info_tbl '||
                   'where emplid = cosd_vp_telecommute.personnum and '||
    'eff_dt between trunc(p_from_date) and trunc(p_to_date)) fiscalyreffdt, '||
         '(select elig_config7 from cosd_telecommute_info_tbl '||
                   'where emplid = cosd_vp_telecommute.personnum and '||
    'eff_dt = (select max(eff_dt) from cosd_telecommute_info_tbl '||
                        'where emplid = cosd_vp_telecommute.personnum and '||
    'eff_dt between trunc(p_from_date) and trunc(p_to_date))) fiscalyrmiles '||
    'from cosd_vp_telecommute '||
    'where trunc(applydtm) '||
    'between p_from_date and p_to_date '||
                   'and personnum = ''029791''';
                   open p_rset for p_str;
    end cosd_telecommute_procedure;
    and here is the error I am getting:
    ERROR at line 1:
    ORA-00904: invalid column name
    ORA-06512: at "TKCSOWNER.COSD_TELECOMMUTE_PROCEDURE", line 40
    ORA-06512: at line 5

    Did you run the query in SQL*Plus? Check whether all the columns are valid in your database. Below is the query extracted from your procedure; just run it and check. It is really hard for us to tell you the problem without knowing the table structure.
    select personnum, fullname, personid, hours, applydtm, paycodeid, laborlev2nm, laborlev3nm,
    laborlev2dsc, adjdate, timeshtitemtypeid,
    (select max(eff_dt) from cosd_telecommute_info_tbl
      where emplid = cosd_vp_telecommute.personnum
      and eff_dt between '01-aug-2005' and '30-aug-2005') thisyreffdt,
      (select elig_config7 from cosd_telecommute_info_tbl where emplid = cosd_vp_telecommute.personnum
      and eff_dt = (select max(eff_dt)
                    from cosd_telecommute_info_tbl
                    where emplid = cosd_vp_telecommute.personnum
                    and eff_dt between '01-aug-2005' and '30-aug-2005'
       )) thisyrmiles,
      (select max(eff_dt)
       from cosd_telecommute_info_tbl
       where emplid = cosd_vp_telecommute.personnum
       and eff_dt between trunc(p_from_date) and trunc(p_to_date)
       ) fiscalyreffdt,
       (select elig_config7 from cosd_telecommute_info_tbl where emplid = cosd_vp_telecommute.personnum
       and eff_dt = (select max(eff_dt)
                     from cosd_telecommute_info_tbl
                     where emplid = cosd_vp_telecommute.personnum
                     and eff_dt between trunc(p_from_date) and trunc(p_to_date)
       )) fiscalyrmiles
    from cosd_vp_telecommute
    where trunc(applydtm) between p_from_date and p_to_date
    and personnum = '029791'
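    One more thing worth checking (an observation, not from the original reply): inside the dynamic string, p_from_date and p_to_date are just text, so at parse time Oracle looks for columns of those names in the queried tables, which is a typical cause of ORA-00904. A hedged sketch of passing them as bind variables instead; note that with native dynamic SQL binds are positional, so every occurrence of a placeholder in the string needs its own entry in the USING clause:
    p_str := 'select ... '                                   -- rest of the select list as before
          || 'from cosd_vp_telecommute '
          || 'where trunc(applydtm) between :b_from and :b_to '
          || 'and personnum = ''029791''';
    open p_rset for p_str using p_from_date, p_to_date;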

  • Number of rows inserted is different in bulk insert using select statement

    I am facing a problem in bulk insert using SELECT statement.
    My sql statement is like below.
    strQuery :='INSERT INTO TAB3
    (SELECT t1.c1,t2.c2
    FROM TAB1 t1, TAB2 t2
    WHERE t1.c1 = t2.c1
    AND t1.c3 between 10 and 15 AND)' ....... some other conditions.
    EXECUTE IMMEDIATE strQuery ;
    These SQL statements are inside a procedure. And this procedure is called from C#.
    The number of rows returned by the SELECT query is 70.
    On the very first call of this procedure, the number of rows inserted by strQuery is 70.
    But on the next call (in the same transaction), the number of rows inserted is only 50.
    And if we keep calling this procedure, it sometimes inserts 70 rows, sometimes 50, etc. It is inconsistent.
    On my initial analysis I found that the default optimizer mode is ALL_ROWS. When I changed the optimizer mode to RULE, this issue does not occur.
    Has anybody faced this kind of issue?
    Can anyone tell what the reason for this issue might be? Is there any other workaround?
    I am using Oracle 10g R2 version.

    You have very likely concurrent transactions on the database:
    >
    By default, Oracle Database permits concurrently running transactions to modify, add, or delete rows in the same table, and in the same data block. Changes made by one transaction are not seen by another concurrent transaction until the transaction that made the changes commits.
    >
    If you want to make sure that the same query always retrieves the same rows in a given transaction you need to use transaction isolation level serializable instead of read committed which is the default in Oracle.
    Please read http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10471/adfns_sqlproc.htm#ADFNS00204.
    You can try to run your test with:
    set transaction isolation level serializable;
    If the problem is not solved, you need to search for possible Oracle bugs on My Oracle Support with keywords like:
    wrong results 10.2

  • Insert using select statement

    Hi,
    I am trying to insert values using a select statement, but this is not working:
    INSERT INTO contribution_temp_upgrade
    (PRO_ID,
    OBJECT_NAME,
    DELIVERY_DATE,
    MODULE_NAME,
    INDUSTRY_CATERGORIZATION,
    ADVANTAGES,
    REUSE_DETAILS)
    VALUES
    SELECT
    :P1_PROJECTS,
    wwv_flow.g_f08(vRow),
    wwv_flow.g_f09(vRow),
    wwv_flow.g_f10(vRow),
    wwv_flow.g_f11(vRow),
    wwv_flow.g_f12(vRow),
    wwv_flow.g_f13(vRow)
    FROM DUAL;
    Please let me know what I am missing.
    Thanks
    Sudhir

    Try this
    INSERT INTO contribution_temp_upgrade
    (PRO_ID,
    OBJECT_NAME,
    DELIVERY_DATE,
    MODULE_NAME,
    INDUSTRY_CATERGORIZATION,
    ADVANTAGES,
    REUSE_DETAILS)
    SELECT
    :P1_PROJECTS,
    wwv_flow.g_f08(vRow),
    wwv_flow.g_f09(vRow),
    wwv_flow.g_f10(vRow),
    wwv_flow.g_f11(vRow),
    wwv_flow.g_f12(vRow),
    wwv_flow.g_f13(vRow)
    FROM DUAL;
    Note: when you populate the insert from a SELECT statement, you should not specify the keyword VALUES.
    I assume you have already assigned a value to your bind variable :P1_PROJECTS, and that the rest of the functions return some value.
    Regards,
    Prazy
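    Since the SELECT here only reads from DUAL, an equivalent alternative (a sketch, assuming this runs inside a PL/SQL block, e.g. an APEX process, where vRow is the current row index) is to keep VALUES and drop the SELECT entirely:
    INSERT INTO contribution_temp_upgrade
      (PRO_ID, OBJECT_NAME, DELIVERY_DATE, MODULE_NAME,
       INDUSTRY_CATERGORIZATION, ADVANTAGES, REUSE_DETAILS)
    VALUES
      (:P1_PROJECTS,
       wwv_flow.g_f08(vRow), wwv_flow.g_f09(vRow), wwv_flow.g_f10(vRow),
       wwv_flow.g_f11(vRow), wwv_flow.g_f12(vRow), wwv_flow.g_f13(vRow));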

  • How to insert variable value using select statement - Oracle function

    Hi,
    I have a function which inserts record on basis of some condition
    INSERT INTO Case
    (Case_ID,
    Case_Status,
    Closure_Code,
    Closure_Date)
    SELECT newCaseID,
    caseStatus,
    Closure_Code,
    Closure_Date
    FROM Case
    WHERE Case_ID = caseID;
    Now I want the new case status value in place of the caseStatus value in the select statement. I have a variable m_caseStatus and I want to use the value of this variable in the above select statement.
    How can I do this?
    thanks

    Do not select Case_Status in the inner select, so NULL will be inserted; then, after inserting, update the case status with m_caseStatus.
    Regards.
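    As an aside (not from the original reply): inside a PL/SQL function, the local variable m_caseStatus can normally be referenced directly in the SELECT list of the INSERT ... SELECT, so the extra update may not be needed. A sketch, using the names from the question:
    INSERT INTO Case
      (Case_ID, Case_Status, Closure_Code, Closure_Date)
    SELECT newCaseID,
           m_caseStatus,      -- PL/SQL variable bound as the new status
           Closure_Code,
           Closure_Date
    FROM   Case
    WHERE  Case_ID = caseID;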

  • Select query and Insert statement performance

    Hi all,
    Can anyone please guide me on the problems below?
    1) One simple INSERT statement runs very slowly. What might be the reason? It is a simple table, without any LOBs, LONGs or the like. Everything else in the DB works fine.
    2) One SELECT statement runs very slowly. It selects all records (around 1000) from a table. How can I improve its performance?
    3) Which columns in the master and its detail tables should be indexed to improve query performance on them?
    Many Thanks
    Regards
    sandeep

    To get an answer to your questions you have to post some information about your system:
    1. operating system
    2. RAM
    3. oracle version
    4. init.ora
    Thomas
