Database Primary Key Question - Best Practices

I posted this in the ADDT forum, but I imagine I'll get more responses here:
All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber? Or only sometimes? Is there an argument to NOT use AutoIncrement? I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for a MS Access book I am involved with.)
Alec
Adobe Community Expert

.oO(Alec)
>I posted this in the ADDT forum, but I imagine I'll get more responses here:
>All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber?
No.
>Or only sometimes? Is there an argument to NOT use AutoIncrement?
AUTO_INCREMENT is a proprietary MySQL feature. For some people this might be an argument against it, but it doesn't have to be. Every DBMS has its own special features. You just have to decide whether you want to keep your code/queries as portable as possible or want to get the most out of your DB. Usually I prefer performance/features over portability, simply because for me and my projects it's very unlikely that I'll have to change the DBMS. I've chosen MySQL for good reasons and will stay with it for quite a while.
>I know how I create databases and how I usually do things. I know how a few of my colleagues work.
>But how about the rest of the world? (Research for a MS Access book I am involved with.)
It always depends on the table itself, what data it contains, what I want to do with it and also some personal preferences. In n:m tables, for example, there's no need for an extra numeric PK, since the entire record already is the PK, built from two or more FKs.
But if I need a numeric PK, I usually use sequences. Some DBMSs support them natively; in MySQL they can be emulated with an extra table. It simply means that the PK number is generated _before_ the record itself is inserted. For me and my framework this has some advantages (it makes the internal work a bit easier), but of course in other cases an AUTO_INCREMENT might be more appropriate.
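A minimal sketch of such an emulated sequence in MySQL (table and column names are just examples):
CREATE TABLE customer_seq (id INT UNSIGNED NOT NULL);
INSERT INTO customer_seq VALUES (0);
-- fetch the next value _before_ inserting the actual record
UPDATE customer_seq SET id = LAST_INSERT_ID(id + 1);
SELECT LAST_INSERT_ID();  -- use this value as the PK in the subsequent INSERT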
So IMHO there's no general solution. If an AUTO_INCREMENT or something similar fits your needs, you should use it. I don't see a real problem with that.
Micha

Similar Messages

  • General Oracle Database Performance trouble solving best practice Steps

    We use Oracle 11g DB on Windows 2008 R2 as a web application backend DB.
    We have performance trouble in that DB.
    I would like to know general Oracle database performance troubleshooting best practice steps.
    Is there any good general best practice document for performance troubleshooting on the internet?

    @Girish Sharma:  I disagree with this. Many people say things like your phrase "..first identify the root cause and then move forward" but that is not the first step. Any such technique is nothing more than looking at some report, finding a number that you don't like, and attempting to "fix" it. Some people use that supposedly funny term "compulsive tuning disorder" (first used by Gaja Krishna Vaidyanatha) to describe this approach (also advocated in this topic by @Supriyo Dey). The first step must be to determine what the problem is. Until you know that, all those reports you mentioned (which, remember, require EE plus pack licences) are useless.
    @teradata0802, your best practice starts by finding the problem. Is it, for example, that the overnight batch jobs don't finish until lunchtime? A screen takes 10 seconds to refresh, and your target is one second? A report takes half an hour, but you need to run it every five minutes? Determine what business function is causing your client to lose money because it is too slow. Then investigate what it is doing, how, and why. You have to begin by focussing on the problem, not by running database-wide reports.

  • Primary key question with PHPmysql database

    I know that updating the primary key is taboo, but I have a newsletter subscription form set up that works perfectly, and I can't figure out how to create the primary key. I tried using the subscriber email address (because it's unique), but in trying to add an update function I realized that the email address may need to be updated.
    My question is how do I get a primary key that will work when I'm relying on a form variable to create the record and therefore have no control over assigning one?
    Is there a way to have one automatically assigned when a subscriber hits the submit button?

    Bingo.
    I wrote:
    ALTER TABLE email_list ADD (
      id MEDIUMINT NOT NULL AUTO_INCREMENT,
      name CHAR(30) NOT NULL,
      PRIMARY KEY (id)
    );
    I got my primary key column. Thank you so much!!!
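    With that column in place, the form's INSERT can simply omit the id and let MySQL assign it. A small sketch (the email column name is assumed here):
    INSERT INTO email_list (name, email)
    VALUES ('Jane Doe', 'jane@example.com');
    -- id is filled in by AUTO_INCREMENT; the generated value is available via LAST_INSERT_ID()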

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
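    As a rough sketch of step 3 (table and database link names are placeholders):
    INSERT INTO patient
    SELECT * FROM patient@prod_db_link;
    COMMIT;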
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database, and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and do an incremental (fast) refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
    Justin
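    A rough sketch of that approach (object and link names are placeholders, borrowing the PATIENT table from the question):
    -- on the source (production) database: enable incremental (fast) refresh
    CREATE MATERIALIZED VIEW LOG ON patient WITH PRIMARY KEY;
    -- on the warehouse database: the nightly-refreshed copy, read over the db link
    CREATE MATERIALIZED VIEW patient_mv
      BUILD IMMEDIATE
      REFRESH FAST ON DEMAND
      AS SELECT * FROM patient@prod_db_link;
    -- refresh once a day, e.g. at 2 a.m., via a refresh group
    BEGIN
      DBMS_REFRESH.MAKE(
        name      => 'nightly_dw_refresh',
        list      => 'PATIENT_MV',
        next_date => TRUNC(SYSDATE) + 1 + 2/24,
        interval  => 'TRUNC(SYSDATE) + 1 + 2/24');
    END;
    /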

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing our base of centers. This growth has brought a workload that used to be manageable with but two people to a never-ending sprint with five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a singular column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking as such:
    <BRANDS>
         <BRAND>
              • BrandID = xxxx
              [Brand Name]
              [Description]
              [WebMoniker]
              <CATEGORIES>
                   <CATEGORY>
                        • xmlns = URL
                        • WebMoniker = category_type
              <STORES>
                   <STORE>
                        • StoreID = ID#
                        • CenterID = ID#
    I don't think this is currently usable because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <Store> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
         [Center_name]
         [Center_location]
              <CATEGORIES>
                   <CATEGORY>
                        [Category_Type]
                        <BRANDS>
                             <BRAND>
                                  [Brand_name]
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject is that what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- but I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame.  Assuming that everything is cascaded correctly using auto-flow will cause new pages to be automatically generated with the tags correctly placed in a similar fashion to datamerge -- but far more powerful and flexible. 
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • ESB DbAdapter primary key question

    Hi,
    I have an ESB process that pushes data from a legacy table into the GL_INTERFACE table in Oracle E-Business Suite.
    The DbAdapter for reading from the legacy table had two primary keys defined. We've recently learned that these two fields do not always uniquely identify a record. If I had two records with the same data in these two fields, they would both be inserted into the destination table, but the data inserted for both records would be the same as that of the first record (seemingly ignoring the second record). I solved this problem by defining a third primary key relationship in the DbAdapter, and that worked fine.
    My question is this: why did both records inserted into the destination table contain the same data?
    Any ideas?
    Thanks in advance.

    The table in the database doesn't have a primary key defined, but, of course, Oracle's DbAdapter requires primary keys defined. There were actually two records in the database, with the same information in the columns that I defined as primary keys, but other information was different for the two records. It seems as though the ESB used the data from the first record when it encountered the second. Am I correct that this is what happened?

  • Composite Primary Key question

    Hi guys,
    First of all, I am a real newbie in DB matters.
    Are there "best practices" about the order/position of the columns of a composite primary key?
    For instance, I have a table named STOCK:
    CODE  ISSUE  STATUS  etc...
    1020  2007   OK
    1030  2007   OK
    1040  2007   OK
    1020  2008   OK
    1030  2008   NA
    If I want to add a composite key on the columns CODE and ISSUE, is the (CODE, ISSUE) key the same as the (ISSUE, CODE) key? Is there a rule to consider for the order of the columns in a composite primary key on big tables?
    Thanks a lot for you comments...

    Assuming that a composite primary key is appropriate, it shouldn't matter what order you list them unless you have queries that select data based on only one of the two columns.
    I am generally suspicious of self-described newbies who are creating composite primary keys, though. From a data modeling perspective, that means that any child tables are going to have to have both CODE and ISSUE columns. You have to remember to join on both columns every time. The columns CODE and ISSUE also don't sound like the sort of immutable columns that you would want for a primary key -- writing code to do cascading updates if you ever have to change a primary key value is a pain.
    Justin
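    A minimal illustration, using the STOCK example from the question (assuming the table exists as shown above):
    ALTER TABLE stock ADD CONSTRAINT stock_pk PRIMARY KEY (code, issue);
    -- Uniqueness is identical to PRIMARY KEY (issue, code). The column order only matters
    -- for queries that filter on one of the columns alone; this one can use the leading
    -- edge (CODE) of the supporting index:
    SELECT * FROM stock WHERE code = 1020;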

  • Mapping ejb and database - primary key problem

    I'm trying to map some ejb's to an existing database. Unfortunately, when I load the db schema into deploytool, the table I'm interested in is not available because it doesn't have any primary key defined.
    Is it impossible to map an ejb to a db table without primary key ?
    Is it a sun-specific feature, or can I expect the same behaviour from other application servers vendors ?
    Thanks for the help.

    Hi Maxime,
    In J2EE RI 1.4, you can't map beans to tables w/o primary keys. I don't know how other products treat this issue.
    Regards,
    -- markus.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but I think that perhaps I need general community advice on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
    Now in regard to the post, the reports have all been created in Crxi 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3/XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips is appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • Composite Primary key & Primary key questions

    Background:
    1.5 TB data warehouse. All tables use at least a 3-column composite primary key. Most use 3: col1||col2||col3 to achieve uniqueness.
    col1 = very low cardinality. 99% of the values are '0'
    col2 = high cardinality. (SSNs, loaded monthly)
    col3 = low cardinality. (monthly sequence number used for partitioning)
    From what I've read, the first column used in a composite primary key also receives its own index. If that is correct, then the primary key being used is losing its effectiveness by putting a very low cardinality column as the first column of the Primary key. By moving col2 to the first position of the composite key, I would be obtaining an index on SSN which would be much more useful. (Please correct me if I'm wrong) Or since the index would be so large anyway, the CBO would just do full table scans which is what I'm trying to get away from.
    This Primary key is used to ensure uniqueness during loads. Other than that, it really has no purpose and is never queried by the user. (WHERE col1||col2||col3 = xxxxxxxxx)
    As a DBA, I see this massive tablespace for holding the primary keys now reaching in excess of 157 GB and it seems like such a waste of space. The problem I'm running across is how to ensure uniqueness during the loads w/o having to store the unique values. (the 157 GB) If I drop the PK's, then create unique indexes just for the loads, then the unique index would be generated against the entire table, which would take forever.
    If you create a local index (what I want in the end) it still has to create the index for all previous partitions (250+ partitions). You can't create a specific 'local' index on a partition. You create a local index on the table and when a new partition is added, the index is then added as well. That's what I want to do with some of my fields like SSN but for loading purposes, this isn't going to help me.
    I need a way to ensure uniqueness during a monthly load of a new partition (col1||col2||col3) but w/o having to store these values for eternity. Right now it's the primary key, so it has to be stored. I hope I'm making sense here. Any ideas?

    I think you need some clarification about the index supporting the primary key.
    The primary key is unique - the database will enforce this for you.
    All of the columns in the primary key are in the index that enforces the key (not just the first column).
    Usually this index is a unique index, but you can have a non-unique index supporting a primary key.
    I'm a little confused with some of your other statements
    "how to ensure uniqueness during the loads w/o having to store the unique values"
    If you don't store the value, how do you know it is unique?
    Did you mean that you don't want to store the key values in both the table and the index?
    You may want to consider making col3 (the partition key) the leading column of the index, so you can create a local (instead of global) PK index.
    I wouldn't worry too much about col1 (the column full of mostly zeros) - numbers are stored variable length. A NUMBER(38) that holds a zero takes no more space than a NUMBER(1) that holds a zero.
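    A hedged sketch of that last suggestion (table and index names are placeholders; col1-col3 are as described above):
    -- pre-create a LOCAL unique index with the partition key (col3) as the leading column,
    -- then have the primary key constraint use that index
    CREATE UNIQUE INDEX fact_pk_ix ON fact_table (col3, col2, col1) LOCAL;
    ALTER TABLE fact_table
      ADD CONSTRAINT fact_pk PRIMARY KEY (col3, col2, col1)
      USING INDEX fact_pk_ix;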

  • Composite Primary Key question

    I have a new question regarding composite primary keys on another table I have created.
    I have the following table with the following definition:
    CREATE TABLE "APSOM"."CPULIST"
    ( "CPU_ID" VARCHAR2(10 BYTE),
    "SERVER_ID" VARCHAR2(10 BYTE),
    "CREATED_DATETIME" TIMESTAMP (6),
    "UPDATED_DATETIME" TIMESTAMP (6)
    with the following composite PK definition:
    ALTER TABLE "APSOM"."CPULIST" ADD CONSTRAINT "CPULIST_PK" PRIMARY KEY ("CPU_ID", "SERVER_ID")
    Then, I inserted data in the following way:
    insert into CPULIST
    values ('CPU1', 'P1', SYSDATE, SYSDATE);
    with CPU ID values from 'CPU1' through 'CPU16' also inserted for SERVER ID value 'P1'.
    Now, I am trying to insert values for SERVER ID value 'P2'
    insert into CPULIST
    values ('CPU1', 'P2', SYSDATE, SYSDATE);
    and I am getting the following error message:
    Error starting at line 1 in command:
    insert into CPULIST
    values ('CPU1', 'P2', SYSDATE, SYSDATE)
    Error report:
    SQL Error: ORA-00001: unique constraint (APSOM.XPKCPULIST) violated
    00001. 00000 - "unique constraint (%s.%s) violated"
    *Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
    For Trusted Oracle configured in DBMS MAC mode, you may see
    this message if a duplicate entry exists at a different level.
    *Action: Either remove the unique restriction or do not insert the key.
    Using the following SQL command:
    select column_name from all_cons_columns
    where constraint_name = 'CPULIST_PK'
    and owner = 'APSOM';
    I get the following records
    1. CPU_ID
    2. SERVER_ID
    This error does not make sense to me. Any help would be appreciated.
    Thanks,
    Patrick Quinn
    Operations
    Turning Point Global Solutions

    So, there seems to be a hidden unique constraint XPKCPULIST that cannot be pulled from the all_cons_columns table. If I try to drop the XPKCPULIST constraint with the following SQL:
    alter table CPULIST drop constraint XPKCPULIST;
    Error starting at line 1 in command:
    alter table CPULIST drop constraint XPKCPULIST
    Error report:
    SQL Error: ORA-02443: Cannot drop constraint - nonexistent constraint
    02443. 00000 - "Cannot drop constraint - nonexistent constraint"
    *Cause:    alter table drop constraint <constraint_name>
    *Action:   make sure you supply correct constraint name.
    What can I do to get past this? When I perform the following insert, I still receive this error message:
    insert into CPULIST
    values ('CPU1', 'P2', SYSTIMESTAMP, SYSTIMESTAMP);
    Error starting at line 1 in command:
    insert into CPULIST
    values ('CPU1', 'P2', SYSTIMESTAMP, SYSTIMESTAMP)
    Error report:
    SQL Error: ORA-00001: unique constraint (APSOM.XPKCPULIST) violated
    00001. 00000 - "unique constraint (%s.%s) violated"
    *Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
    For Trusted Oracle configured in DBMS MAC mode, you may see
    this message if a duplicate entry exists at a different level.
    *Action:   Either remove the unique restriction or do not insert the key.
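    One possible way to investigate, assuming XPKCPULIST is a leftover standalone unique index (for example created by a modelling tool) rather than a constraint, is to look for it in the index views and drop it there:
    SELECT index_name, uniqueness
      FROM all_indexes
     WHERE owner = 'APSOM' AND table_name = 'CPULIST';
    SELECT column_name, column_position
      FROM all_ind_columns
     WHERE index_owner = 'APSOM' AND index_name = 'XPKCPULIST';
    -- if it shows up only as an index (not a constraint), it can be dropped directly
    DROP INDEX "APSOM"."XPKCPULIST";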

  • ISE policy creation question - best practices

    Ok, I am a rookie ISE user here and am trying to learn as I go. I have a 802.1x policy for our corporate users on both wired and wireless and a wireless guest policy that redirects to the guest portal to enter credentials created in the sponsor portal. The corporate user has access to corporate resources and the guest basically has access to just the internet.
    I need to make what I am calling a Vendor policy that is basically a hybrid of the corporate user and the guest user. These would be vendors that are on-site to assist with programming and need access longer than what the guest account can be created for. This would also have specific ACLs that grant them access to the specific resources they would need. I would like to tie this into AD authentication since they have an AD account created to be able to access those corporate resources in most cases. My first question is: do I have a single policy that is tweaked as vendors come and go, or do I simply create a specific policy for each vendor? My second question is: do I, or should I, create unique SSIDs for each vendor?
    As I said, I am just now getting into getting ISE configured. I am just not sure of what is considered a best practice or what is considered a secure way to make things happen. In regards to the policies I have created, they work, but I think I have a couple of holes to address.
    Thanks ...
    Brent

    Mostly makes sense. I have the AD part just need to get an AD group created for my test subject.
    I created an Endpoint Identity Group to place the vendors devices into so that we can allow laptop to connect but not phone. Got that.
    I think I can handle the Authorization Profile. It will be something like if VendorAsset and AD1:ExternalGroups Equals VendorADGroup then VendorPermissions. VendorPermissions would be the ACL that limits where they can go. I also need to create a non 802.1x based SSID as well and add this to the Authorization profile but can still be generic enough to be useable by all vendors.
    I think it is my Authentication rules that I need to modify for Vendor as my Corporate based policies use Dot1x and I need a policy that does not use dot1x. Right?

  • Primary Key Question

    Probably a silly question - I'm fairly new to APEX and trying to design a small system. I come from a FORMS background, and would always have had my support tables with character keys e.g. Clubs - Club_code, varchar2(3), Club_name varchar2(30) with values like "ABC" and "CDE" as mnemonic primary keys. APEX seems to want a sequential PK on the tables used - is it feasible to use character PK fields, or better to go with the default functionality?
    Thanks
    Malcolm
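    For reference, a sketch of the kind of character-keyed table being described (the real definition is whatever the existing system uses):
    CREATE TABLE clubs (
      club_code VARCHAR2(3) PRIMARY KEY,
      club_name VARCHAR2(30) NOT NULL
    );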

    Hi Mal,
    Like you, I'm not a fan of sequence-generated keys on everything. I think this was a design choice made by the Apex team in the early days of Apex (HTML DB), when Apex was intended primarily for non-technical users. To me it's an implementation choice and has nothing to do with relational design theory, and when working with new interfaces to existing systems, it may be a choice the developer does not have. I think it would just be simpler if the Apex developer had that choice when using the wizards.
    For ordinary forms it is not too difficult.
    1. Create a new page as a "Form on a Table with Report". Identify the PK field and set this as populated from an existing trigger.
    2. Edit the form page that was created. Edit the item created for the PK field. Set the "Display As" property under "Name" to "text field".
    3. Set the "Read Only Condition Type" property under "Read Only" to "Value of Item in Expression 1 is NOT NULL". Enter the name of the item in "Expression 1".
    Now when you run the report page, depending on whether the table is empty, you may see some rows. Regardless, you can click on the create button and it will take you to the form page with the PK text field enterable. You can enter a row of data and click create, and you are taken back to the report page. Now when you click the edit link for a row in the report you will see that you are taken to the form page and the PK field is now read-only. (Correct behaviour, as PKs should not be updateable.)
    Depending how you specified the link in the report page, you may also want to change the PK column to show the value of the column rather than an edit graphic. This can be done by editing the report page and then editing the Report Attributes of the report region. Edit the PK column from the list of columns and set the Link Text in the Column Link to the column name surrounded by # character. ie #P30_COL_NAME#. This can be quickly achieved by clicking the column name quick link just below.
    Unfortunately there is no easy fix with the tabular forms wizard. You either have to create new Multi Row Update processes or possibly use the updateable views and INSTEAD OF triggers method.
    Regards
    Andre

  • Session question; best practice

    Hi,
    One of our high-profile applications serves queries/updates to user sessions. We want to improve user query performance and reduce general database activity.
    This piece of the application causes an auto refresh to execute every 60 seconds. These queries execute against order tables looking for statuses on active orders, are user specific, and in some cases are not optimally tuned, producing very high database buffer get and disk read activity. On average, 1,500 executions representing various flavors of these queries are executed hourly.
    my questions are:
    1) how can we get max performance ?
    2) can we cache these queries for like every 30 secs ?
    3) how can we cache ? so that user sessions would access the cache.
    -sharma

    Well, you could load the data and put it in the application scope (in memory) with a timeout, so that it's not used after a certain period; at that point a request would have to go to the DB to get the newer data.

  • Using Database Control in JPDs - Best Practice

    Hi,
    I would like to know the best way of using a DB control in a JPD, i.e. which one is better:
    a) Using a Control Send node in workshop and configuring it from the workshop design view as a separate node.
    or
    b) In a perform node call the method on the DB control object.
    like:
    /**
     * @common:control
     */
    private com.abc.def.SampleDBCtrl xxxx;
    ...
    xxxx.insertIntoDb(parameter1, parameter2);
    Are there any disadvantages of approach b over approach a, or any issues performance-wise?
    I would like to know which one is better.
    I sincerely appreciate your advice on this.
    Thanks

    Hello,
    I do not know the source of your problem, but
    a.) Check if the db column types match the parameter types.
    See http://e-docs.bea.com/workshop/docs81/doc/en/workshop/guide/controls/database/conMappingDBFieldsToJavaTypesInTheDBCtrl.html for more info
    b.) command-type is only used for rowset controls.
    -Kai
