What is the difference between Primavera Contractor and the full version of P6?

Can someone tell me the difference between Primavera Contractor and P6?
Someone told me they are the same except that with Contractor you can only do up to 500 activities.
Is that the only difference?
How about file formats, etc.?
Let's say I buy Primavera Contractor and someone wants a schedule in P6 from me, which they will. If I do it with Contractor, will it be the same file type, so they will not notice as long as it is under 500 activities?

Hi there
The only difference is the amount of time you have to use it before you have to ante up and pay the piper.
Well, there is one other difference. Typically, whatever you create using the trial is pre-configured with an expiration date on the project. That means that if you didn't purchase, the content you created will become unusable until you purchase and publish again with the expiration removed.
Cheers... Rick

Similar Messages

  • What is the difference between ojvm and client versions?

Changing the Java VM from client to ojvm results in the following error:
Error message:
    java.lang.UnsatisfiedLinkError: no UniqueC in java.library.path
    Project Settings -> Configurations -> Development -> Runner -> Virtual Machine -> ojvm FAILS
Project Settings -> Configurations -> Development -> Runner -> Virtual Machine -> client      RUNS OK.
    Project Settings -> Configurations -> Development ->Paths ->Additional Classpath:
    C:\jars\xerces.jar;C:\jars\UniqueC.dll;C:\jars\log4j-1.2.8.jar
What is the difference between the ojvm and client versions? How can I make ojvm find UniqueC.dll?
    Various version info:
    Output from program:
    java version:1.4.2_01
    java home:C:\programfiler\JAVA\2sdk1.4.2_01\jre
    java vm version:9.0.3.738 cdov
    Taken from JDeveloper Help About:
    Oracle IDE     9.0.3.10.35
    UML Modelers Version     9.0.3.9.4
    Business Components Version     9.0.3.10.7
    java.version     1.3.1_02
    java.vm.name     OJVM Client VM
    java.vm.version     9.0.3.738 o


What is the difference between the paid and unpaid versions of Adobe Reader?

What is the difference between the paid and unpaid versions of Adobe Reader?

Adobe Reader itself is free. However, Adobe offers extra paid services to create PDFs or to export PDFs to other formats; you are not required to buy them.

What are the differences between the Mac and PC versions of Elements 12?

I am a Mac user and have been disappointed at some of the limitations of Elements 11 on my Mac versus what I understand you can do on a PC. Have these issues been corrected for Mac users in Elements 12? Thanks

I have to say that I find it really amusing when people who don't use a specific feature of a software program automatically criticize people who do need and/or use it. I realize that many people have different workflows, and some people use certain features often, whereas other people don't use those features at all. It doesn't mean that the feature should not be in the program.
Having said that, I find watched folders to be indispensable for my workflow. In addition to photography, I do a lot of graphic design work. I use Photoshop Elements mainly for organizing all of my image files. I use the full version of Photoshop for most of my photo editing and graphic design work, and I build most graphic design projects from the ground up in Photoshop. On the PC, once I initially saved the graphic design file, it would automatically be imported into the organizer for me. Any edits I made to photographs and then saved as PSD files were also imported automatically into the organizer.
As it stands now, I have to remember to (and sometimes hunt around for) any of these files after the fact so that I can import them into the organizer. With 3 kids running around my house, I am often interrupted while working and forget to immediately import the files into the organizer. It is a real pain in the neck for me to have to go back and search for files, sometimes weeks after I have created them. I really liked the way it worked on my PC and don't understand why it isn't available on the Mac.
Lightroom has a version of the watched folder (though it moves files out of that folder upon import) and a synchronize folders function (though not automatic, and it might take a while for a large catalog), both available on the Mac, so I would think that it's not really a Mac limitation. Then again, I'm not a programmer, and I know that something that seems like a small fix to me may require a huge amount of coding to accomplish. I just wish Adobe would tell me if this is NEVER going to happen, so that I can move on and establish a new workflow.

What are the official AIR SDK and Flex versions to release for iOS 6?

    Hi,
I wanted to know which AIR SDK and Flex versions fully support iOS 6 for submission to Apple.
I found that AIR SDK 3.4 supports iOS 6, though code created for iOS 6 does not work correctly when built from Flash Builder 4.7 with AIR SDK 3.4 and Flex SDK 4.6.
    Thanks
    Regards

    19th...

  • What is the difference between Retail and Academic versions of FCP2

    I was recently offered an Academic version of FCP(6)2 at a lower price than the retail version. I am tempted to get it but I have heard rumors and read posts that say the Academic version is limited to students and academic learning. Also, there will be no way to update the version when updates become available for FCP7. I would need to purchase the whole suite again. Is this true?
    I am not a student and would be using the version for part time income. I just thought a lower price than retail would be a good find. Please let me know.

    You are correct. You should be a student or teacher. And you can't upgrade it.
    Patrick

1) Now I use Lightroom 5.7; how do I upgrade to 6 or CC? 2) What is the difference between the 6 and CC versions? 3) When I used Lightroom 3, I could see in EXIF the distance in meters to the object I shot; in later versions that function disappeared...

1) Now I use Lightroom 5.7; how do I upgrade to 6 or CC?
2) What is the difference between the 6 and CC versions?
3) When I used Lightroom 3, I could see in EXIF the distance in meters to the object I shot; in later versions that function disappeared, which is very sad. I am still waiting and hope that it will be possible in the new versions. Or can this indication be enabled by a setting?

1) Now I use Lightroom 5.7; how do I upgrade to 6 or CC?
Purchase the standalone upgrade from here: Products
Download the CC version from here: Explore Adobe desktop apps | Adobe Creative Cloud
2) What is the difference between the 6 and CC versions?
See this comparison chart: Compare Lightroom versions | Adobe Photoshop Lightroom CC
3) When I used Lightroom 3, I could see in EXIF the distance in meters to the object I shot; in later versions that function disappeared.
Rob Cole's ExifMeta plugin displays the Subject Distance field (and much more). Unfortunately, his website appears to be down again. He used to be very active here, but he hasn't posted in several months.

What is the average duration of one full SAP life cycle or one end-to-end implementation? How long does it take to prepare DEV, QAS and PRD?

What is the average duration of one full SAP life cycle or one end-to-end implementation? How long does it take to prepare DEV, QAS and PRD in any company?

    Anand,
Let me start by saying that the question you ask may not help you determine the duration of your project. As Ryan and others stated, the duration of the project is highly dependent on the scope of the solution you are implementing, the geographical scope, the amount of modifications/enhancements, the number of languages, the number of users that need to be trained, the amount of standard processes the customer is able to re-use in the implementation, and many other factors (like the quality of the implementation contractor you choose and the availability of customer and implementer resources). I could probably go on for another couple of lines, but I guess you get the idea.
With that out of the way, let's talk about some example implementations that will give you an idea. Ryan did a great job outlining what I would call the traditional approach above. I have a couple of examples where customers leveraged innovative deployment strategies that are available today. In particular, the project teams leveraged pre-packaged services like RDS, World Template or Best Practices as their baseline solution and built from there. A second acceleration technique customers now leverage is deployment in the SAP HANA Enterprise Cloud to accelerate the initial setup of the system and thus move from traditional blueprinting to a scope-validation exercise that further shortens the time. There are other acceleration techniques we see applied in some cases, like iterative implementation of the delta requirements on top of the baseline solution.
Let me offer a few examples to illustrate what I explained above:
ERP implementation at Schaidt Innovations with a 3-month deployment of the ERP solution using the ERP RDS as a baseline (you can view their testimonial here - Schaidt Innovations: SAP ERP on HANA in the cloud - YouTube)
A customer in Asia with a global template deployment that leveraged SAP ERP for Manufacturing, with deployment to the cloud and a 9-country rollout (Japan, Korea, China, Taiwan, Hong Kong, UK, Germany and US). The initial deployment took 4 months for the first country and 2 months for the rollout into the additional 8 countries - so a total of 6 months. The original plan using the traditional approach with a full blueprint and heavy configuration was estimated at more than double the actual deployment time.
There are many other examples where customers follow the assemble-to-order delivery model for their projects and gain significant benefits doing so. I suggest you review some of the recordings we did in 2013 about this approach and, if you are a member of ASUG, review the Agile ASAP sessions we did for the ASUG PM SIG.
    Link to webinars: SAP A2O Webinar Series Schedule
    Let me know if you have any questions.
    Jan

What are the settings for the DataSource and InfoPackage for flat file loading?

Hi,
I'm trying to load data from a flat file into a DSO. Can anyone tell me what the settings are for the DataSource and InfoPackage for flat file loading?
Please let me know.
Regards,
Kumar

Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system.
    Uploading of Transaction data
Log on to your SAP system.
Transaction code RSA1 takes you to Modelling.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( Transaction data )
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
• In the Fields tab you can give the technical names of the InfoObjects in the template so that you do not have to map them during the transformation; the server will map them automatically. If you do not map them in this tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
• Create an InfoPackage by right-clicking the data source, and in the Schedule tab click Start to load data into the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to create ODS( Data store object ) or Cube.
• Specify a name for the ODS or cube and click Create.
    • From the template window select the required characteristics and key figures and drag and drop it into the DATA FIELD and KEY FIELDS
    • Click Activate.
    • Right click on ODS or Cube and select create transformation.
• In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate created transformation
• Create a Data Transfer Process (DTP) by right-clicking the data target.
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.
    4. Monitor
Right-click the data target, select Manage, and in the Contents tab select Contents to view the loaded data. There are two tables in the ODS, the new table and the active table; to move data from the new table to the active table you have to activate it after selecting the loaded data. Alternatively, the monitor icon can be used.
    Loading of master data in BI 7.0:
    For Uploading of master data in BI 7.0
Log on to your SAP system.
Transaction code RSA1 takes you to Modelling.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( master data attributes, text, hierarchies)
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
• In the Fields tab you can give the technical names of the InfoObjects in the template so that you do not have to map them during the transformation; the server will map them automatically. If you do not map them in this tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
• Create an InfoPackage by right-clicking the data source, and in the Schedule tab click Start to load data into the PSA. (Make sure the flat file is closed during loading.)
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to select Insert Characteristics as info provider
    • Select required info object ( Ex : Employee ID)
    • Under that info object select attributes
    • Right click on attributes and select create transformation.
• In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.

  • What are the things to check after full imp in 10g?

    Dear all,
    Source
    =======
    OS server ==> HP-UX
    Oracle version ==> Oracle 9.2.0.8
    DB Name ==> MSST
    DB total users ==> 320
    Full export in ==> /u02/export/Jan09.dmp
    Tablespaces ==> SC, SC_I, SA, SA_I, PP, PP_I, AC, AC_I, SD
    Destination
    ============
    OS server ==> HP-UX
    DB Name ==> CHDB
Existing DB users ==> 20
    Oracle version ==> Oracle 10.2.0.4
    copied 9i dump file in ==> /u03/export
    Tablespaces created same as Oracle 9i i.e;
    Tablespaces ==> SC, SC_I, SA, SA_I, PP, PP_I, AC, AC_I, SD
I ran a full import on the destination server, i.e. Oracle 10g, and the following is the command I used:
    imp system/pwd@conn_string file=/u03/export/Jan09.dmp log=/u03/export/Jan09_imp.log full=y ignore=y statistics=none
    Imported successfully with few errors like
    Now my question is
    ===================
How can I check that everything is the same as (or similar to) the Oracle 9i database (source)? For example, I used this technique:
    What are the things to check after full imp in 10g?
    MSST_DB>SHOW USER
    MSST_DB>SA
MSST_DB>select object_type, count(*) from user_objects group by object_type;
    OBJECT_TYPE COUNT(*)
    DATABASE LINK 2
    FUNCTION 23
    INDEX 1795
    LOB 6
    PACKAGE 8
    PACKAGE BODY 8
    PROCEDURE 30
    SEQUENCE 67
    SYNONYM 60
    TABLE 644
    TRIGGER 3
    VIEW 20
    CHDB_DB>SHOW USER
    CHDB_DB>SA
CHDB_DB>select object_type, count(*) from user_objects group by object_type;
    OBJECT_TYPE COUNT(*)
    INDEX 1794
    PROCEDURE 30
    TABLE 644
    TRIGGER 3
    VIEW 20
    FUNCTION 23
    SYNONYM 60
    PACKAGE BODY 8
    SEQUENCE 67
    PACKAGE 8
    LOB 6

What are the things to check after full imp in 10g?
Only public database links that are in use by the users you exported/imported (and only if you did a user-level export/import rather than a database-level one).
If your log file does not show errors during the creation of any object, you don't need to double-check anything. Otherwise, go on with the method you mentioned and count the objects.
Or you can create a database link from the source to the destination and use a query with MINUS to find out whether any object is missing:
select object_name, object_type from user_objects
MINUS
select object_name, object_type from user_objects@destination_database;
Salman
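Along the same lines, a hedged sketch (reusing the hypothetical destination_database link name from the query above) that compares the per-object-type counts shown earlier in a single pass:
-- Object types whose counts differ between source and destination
-- (run as the same schema owner on the source side; link name is an example).
select object_type, count(*) as cnt
from user_objects
group by object_type
MINUS
select object_type, count(*) as cnt
from user_objects@destination_database
group by object_type;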

  • What are the prerequisites to install and configure Oracle Coherence 3.4

What are the prerequisites to install and configure Oracle Coherence (3.4 Grid version) on an RHEL 5.0 system?
I want to set up an Oracle Coherence Grid Data system for testing purposes using 3 machines.
    What kind of network configuration is OK?
    What software should be installed in advance?
    Thank you

    Hi,
    I would read through the Testing and Tuning section of the following page.
    http://coherence.oracle.com/display/COH34UG/Usage+(Full)
    Even though you are not about to go into production I think the Production Checklist has some very useful information.
    http://coherence.oracle.com/display/COH34UG/Production+Checklist
    -Dave

What is the shortcut to maximize a window (full screen), which is normally done by pressing the "<-->"-like key in the top right corner on a MacBook Pro?

What is the shortcut to maximize a window (full screen), which is normally done by pressing the "<-->"-like key in the top right corner on a MacBook Pro?

There's no <--> key on ANYone's keyboard. (Great answer, Shootist.)
    The other part of his answer is only applicable to certain windows (browsers specifically) as there is NO system wide maximize command in the manner you are asking.
    Apple's "maximize" function (green +) simply maximizes the window to the content, not the screen.
    For other keyboard shortcuts (and of course they may be outdated) try: this

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
I have about 20 source tables with large amounts of data.
I need to create interfaces that join the source tables into target tables.
The source tables receive inserts every few seconds of hundreds to thousands of rows.
There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and
how I can use them in my scenario.
In general, what are the relations between 'Journalizing' and 'IKM'?
Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand what is the role of 'Journalizing CDC'?
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have PK on the tables?
What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Thanks in advance Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
Journalizing is a way of tracking only changed data in your source system. If your source tables had a date_modified column, you could always use that as a filter when scanning for changes rather than CDC. Log-based CDC (Asynchronous in ODI, Logminer/Streams or GoldenGate for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it doesn't fully remove the need to scan the source tables.
In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table. You can negate this by specifying ALL source table columns as your PK in ODI - this forces all columns into the unconditional log group, the journal table, etc. You will then need to tweak the JKM to change the syntax sent to the database when starting the journal - I have done this in the past, using a flexfield in the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM set-up (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, and in the right order, to apply to your target tables; without that, you might process the update before the insert, for example, and be out of sync.
So JKMs provide a mechanism for 'changed data only' to be provided to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you don't capture the delete with a normal LKM / IKM set-up).
IKM Incremental Update can be used with or without JKMs; it's for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get 'changed' rows on the load into the target.
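To make that last point concrete, here is a rough, hedged sketch of the kind of statement an Oracle incremental update ultimately boils down to (the MERGE flavour). The table and column names (I$_CUSTOMER, CUSTOMER, CUST_ID and the attribute columns) are placeholders, not what ODI literally generates, and the real IKM runs several steps whose exact SQL depends on the KM options:
-- Placeholder names: I$_CUSTOMER is the integration (I$) table populated upstream,
-- CUSTOMER is the target datastore, CUST_ID is the update key chosen on the interface.
MERGE INTO customer t
USING i$_customer i
ON (t.cust_id = i.cust_id)
WHEN MATCHED THEN
  UPDATE SET t.cust_name = i.cust_name,
             t.cust_city = i.cust_city
WHEN NOT MATCHED THEN
  INSERT (cust_id, cust_name, cust_city)
  VALUES (i.cust_id, i.cust_name, i.cust_city);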
user604062 wrote:
I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and how I can use them in my scenario.
Hopefully I have explained it above. It's the type of thing you really need to play around with, and thoroughly review the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
In general, what are the relations between 'Journalizing' and 'IKM'?
The JKM simply presents (only) changed data to ODI; it removes the need for you to decide 'how' to get the updates and removes the need for costly scans on the source table (full source-to-target comparisons, scanning for updates based on a last-update date, etc.).
Should I use both of them? Or maybe it is better to delete and insert into the target tables?
Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table minus source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
I want to understand what is the role of 'Journalizing CDC'?
It's the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies and a few to choose from for Oracle (Triggers (Synchronous), Streams / Logminer (Asynchronous), GoldenGate, etc.).
Can 'IKM - Incremental Update' work without 'Journalizing'?
Yes, of course. Without CDC your process would look something like:
    Source target ----< LKM >---- Collection table (C$) ----<IKM>---- Integration table (I$) -----< IKM >---- Target table
    With CDC your process looks like :
    Source Journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----<IKM>---- Integration table (I$) -----< IKM >---- Target table
As you can see, it's the same process after the source table (there is an option in the interface to enable the J$ source; the IKM step changes with CDC as you can use 'Synchronise Journal Deletes').
Does 'Journalizing' need to have PK on the tables?
Yes - at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
What should I do if I can't put a PK on them (there can be multiple identical rows)?
Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables and one row gets updated, how can you identify which row in the target table to update?
Thanks in advance, Yael
A lot to take in. As I advised, I would recommend you get a little test area set up and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
Hope this helps!
    Alastair

  • What is the major advantage of Repair full request?

    Hi gurus
    What is the major advantage of Repair full request?
    Thanks in advance
    Raj

    Full Repair request is used when:
    1. During delta upload you have skipped some records and want them in your data target.
    2. When the delta load was not correct and you make a selective deletion and fetch the data for the same selection without disturbing the delta update.
3. When at times even a full upload fails, you can mark the request as a repair request and get the full upload. (I have faced such a scenario.)
    Assign points if useful.
    Regards
    Gajendra

What is the difference between TKPROF and EXPLAIN PLAN?

    Hi,
What is the difference between TKPROF and EXPLAIN PLAN?

    Execution Plans and the EXPLAIN PLAN Statement
    Before the database server can execute a SQL statement, Oracle must first parse the statement and develop an execution plan. The execution plan is a task list of sorts that decomposes a potentially complex SQL operation into a series of basic data access operations. For example, a query against the dept table might have an execution plan that consists of an index lookup on the deptno index, followed by a table access by ROWID.
    The EXPLAIN PLAN statement allows you to submit a SQL statement to Oracle and have the database prepare the execution plan for the statement without actually executing it. The execution plan is made available to you in the form of rows inserted into a special table called a plan table. You may query the rows in the plan table using ordinary SELECT statements in order to see the steps of the execution plan for the statement you explained. You may keep multiple execution plans in the plan table by assigning each a unique statement_id. Or you may choose to delete the rows from the plan table after you are finished looking at the execution plan. You can also roll back an EXPLAIN PLAN statement in order to remove the execution plan from the plan table.
    The EXPLAIN PLAN statement runs very quickly, even if the statement being explained is a query that might run for hours. This is because the statement is simply parsed and its execution plan saved into the plan table. The actual statement is never executed by EXPLAIN PLAN. Along these same lines, if the statement being explained includes bind variables, the variables never need to actually be bound. The values that would be bound are not relevant since the statement is not actually executed.
    You don’t need any special system privileges in order to use the EXPLAIN PLAN statement. However, you do need to have INSERT privileges on the plan table, and you must have sufficient privileges to execute the statement you are trying to explain. The one difference is that in order to explain a statement that involves views, you must have privileges on all of the tables that make up the view. If you don’t, you’ll get an “ORA-01039: insufficient privileges on underlying objects of the view” error.
    The columns that make up the plan table are as follows:
    Name Null? Type
    STATEMENT_ID VARCHAR2(30)
    TIMESTAMP DATE
    REMARKS VARCHAR2(80)
    OPERATION VARCHAR2(30)
    OPTIONS VARCHAR2(30)
    OBJECT_NODE VARCHAR2(128)
    OBJECT_OWNER VARCHAR2(30)
    OBJECT_NAME VARCHAR2(30)
    OBJECT_INSTANCE NUMBER(38)
    OBJECT_TYPE VARCHAR2(30)
    OPTIMIZER VARCHAR2(255)
    SEARCH_COLUMNS NUMBER
    ID NUMBER(38)
    PARENT_ID NUMBER(38)
    POSITION NUMBER(38)
    COST NUMBER(38)
    CARDINALITY NUMBER(38)
    BYTES NUMBER(38)
    OTHER_TAG VARCHAR2(255)
    PARTITION_START VARCHAR2(255)
    PARTITION_STOP VARCHAR2(255)
    PARTITION_ID NUMBER(38)
    OTHER LONG
    DISTRIBUTION VARCHAR2(30)
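As a short illustration of the mechanics described above (reusing the dept/deptno example mentioned earlier; the statement_id value is arbitrary), you might do something along these lines:
-- Ask Oracle to produce the plan without executing the query
EXPLAIN PLAN SET STATEMENT_ID = 'demo1' FOR
  SELECT * FROM dept WHERE deptno = 10;
-- Read the plan back out of the plan table as an indented tree
SELECT LPAD(' ', 2 * (LEVEL - 1)) || operation || ' ' || options || ' ' || object_name AS plan_step
FROM plan_table
START WITH id = 0 AND statement_id = 'demo1'
CONNECT BY PRIOR id = parent_id AND statement_id = 'demo1'
ORDER BY id;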
    There are other ways to view execution plans besides issuing the EXPLAIN PLAN statement and querying the plan table. SQL*Plus can automatically display an execution plan after each statement is executed. Also, there are many GUI tools available that allow you to click on a SQL statement in the shared pool and view its execution plan. In addition, TKPROF can optionally include execution plans in its reports as well.
    Trace Files and the TKPROF Utility
    TKPROF is a utility that you invoke at the operating system level in order to analyze SQL trace files and generate reports that present the trace information in a readable form. Although the details of how you invoke TKPROF vary from one platform to the next, Oracle Corporation provides TKPROF with all releases of the database and the basic functionality is the same on all platforms.
    The term trace file may be a bit confusing. More recent releases of the database offer a product called Oracle Trace Collection Services. Also, Net8 is capable of generating trace files. SQL trace files are entirely different. SQL trace is a facility that you enable or disable for individual database sessions or for the entire instance as a whole. When SQL trace is enabled for a database session, the Oracle server process handling that session writes detailed information about all database calls and operations to a trace file. Special database events may be set in order to cause Oracle to write even more specific information—such as the values of bind variables—into the trace file.
    SQL trace files are text files that, strictly speaking, are human readable. However, they are extremely verbose, repetitive, and cryptic. For example, if an application opens a cursor and fetches 1000 rows from the cursor one row at a time, there will be over 1000 separate entries in the trace file.
    TKPROF is a program that you invoke at the operating system command prompt in order to reformat the trace file into a format that is much easier to comprehend. Each SQL statement is displayed in the report, along with counts of how many times it was parsed, executed, and fetched. CPU time, elapsed time, logical reads, physical reads, and rows processed are also reported, along with information about recursion level and misses in the library cache. TKPROF can also optionally include the execution plan for each SQL statement in the report, along with counts of how many rows were processed at each step of the execution plan.
    The SQL statements can be listed in a TKPROF report in the order of how much resource they used, if desired. Also, recursive SQL statements issued by the SYS user to manage the data dictionary can be included or excluded, and TKPROF can write SQL statements from the traced session into a spool file.
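For completeness, a minimal sketch of the trace-and-format workflow (the identifier and file names below are invented; the actual trace file name and location depend on your platform and instance settings):
-- Tag and enable SQL trace for the current session
ALTER SESSION SET tracefile_identifier = 'mytest';
ALTER SESSION SET sql_trace = TRUE;
-- ... run the application SQL you want to analyze ...
ALTER SESSION SET sql_trace = FALSE;
-- Then, at the operating system prompt, format the resulting trace file, for example:
--   tkprof orcl_ora_12345_mytest.trc mytest_report.txt sys=no sort=fchela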
    How EXPLAIN PLAN and TKPROF Aid in the Application Tuning Process
    EXPLAIN PLAN and TKPROF are valuable tools in the tuning process. Tuning at the application level typically yields the most dramatic results, and these two tools can help with the tuning in many different ways.
    EXPLAIN PLAN and TKPROF allow you to proactively tune an application while it is in development. It is relatively easy to enable SQL trace, run an application in a test environment, run TKPROF on the trace file, and review the output to determine if application or schema changes are called for. EXPLAIN PLAN is handy for evaluating individual SQL statements.
    By reviewing execution plans, you can also validate the scalability of an application. If the database operations are dependent upon full table scans of tables that could grow quite large, then there may be scalability problems ahead. On the other hand, if large tables are accessed via selective indexes, then scalability may not be a problem.
    EXPLAIN PLAN and TKPROF may also be used in an existing production environment in order to zero in on resource intensive operations and get insights into how the code may be optimized. TKPROF can further be used to quantify the resources required by specific database operations or application functions.
    EXPLAIN PLAN is also handy for estimating resource requirements in advance. Suppose you have an ad hoc reporting request against a very large database. Running queries through EXPLAIN PLAN will let you determine in advance if the queries are feasible or if they will be resource intensive and will take unacceptably long to run.
