Best practices with sequences and primary keys

We have a table of system logs with a column called created_date, and a UI that displays these logs ordered by created_date. Sometimes two rows have the exact same created_date down to the millisecond and are displayed in the UI in the wrong order. The suggestion was to order by primary key instead, since the application uses an Oracle sequence to insert records, so the order of the primary key will be chronological. I felt this may be a bad idea as a best practice, since the primary key should not be used to guarantee chronological order. In this particular application's case, though, since it is not a multi-threaded environment, it will work, so we are proceeding with it.
The value for created_date is NOT set at the database level (as SYSDATE) but by the application when it creates the object, which is then persisted by Hibernate. In a multi-threaded environment, thread A could create the object and then get blocked by thread B, which creates its own object and persists it with key N; when control returns to thread A, it persists its object with key N+1. In this scenario thread A has the earlier timestamp but the larger key, so a sort by key orders it incorrectly relative to thread B: the key order and the timestamp order disagree.
I like to think of primary keys as solely something to be used for referential purposes at the database level, rather than carrying application-level meaning (like "the larger the key, the more recent the record"). What do you think? Am I being too rigorous in my views here? Or am I perhaps even mistaken in how I interpret this?
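For concreteness, the orderings under discussion look like this (table and column names here are illustrative, not from our schema):

-- current: rows that tie at millisecond resolution come back in arbitrary order
SELECT log_id, message, created_date FROM system_logs ORDER BY created_date DESC;

-- proposed: rely on the sequence-populated key alone
SELECT log_id, message, created_date FROM system_logs ORDER BY log_id DESC;

-- a middle ground: timestamp first, key only as a deterministic tiebreaker
SELECT log_id, message, created_date FROM system_logs ORDER BY created_date DESC, log_id DESC;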

> I think the chronological order of records should be determined using a timestamp (i.e. "order by created_date desc" etc.)
Not that old MYTH again! That has been busted so many times it's hard to believe anyone still wants to try to do that.
Times are in chronological order: t1 is earlier (SYSDATE-wise) than t2 which is earlier than t3, etc.
1. at time t1 session 1 does an insert of ONE record and provides SYSDATE in the INSERT statement (or using a trigger).
2. at time t3 session 2 does an insert of ONE record and provides SYSDATE
(which now has a value LATER than the value used by session 1) in the INSERT statement.
3. at time t5 session 2 COMMITs.
4. at time t7 session 1 COMMITs.
Tell us: which row was added FIRST?
If you extract data at time t4 you won't see ANY of those rows above since none were committed.
If you extract data at time t6 you will only see session 2 rows that were committed at time t5.
For example, if you extract data at 2:01pm for the period 1pm thru 1:59pm, and session 1 does an INSERT at 1:55pm but does not COMMIT until 2:05pm, your extract will NOT include that data.
Even worse - your next extract will pull data for 2pm thru 2:59pm, and that extract will NOT include the data either, since the SYSDATE value in the row is 1:55pm.
The crux of the problem is that the SYSDATE value stored in the row is determined BEFORE the row is committed, but the only rows that can be queried are those that have already been committed.
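To make that concrete, a windowed extract like the sketch below (names invented) can never pick up the 1:55pm row that commits at 2:05pm: the 2:01pm run cannot see the row yet, and the 3:01pm run filters it out because its stored timestamp falls in the previous window.

SELECT *
FROM   system_logs
WHERE  created_date >= TO_DATE('2014-06-01 14:00', 'YYYY-MM-DD HH24:MI')
AND    created_date <  TO_DATE('2014-06-01 15:00', 'YYYY-MM-DD HH24:MI');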
About the best you, the user (i.e. not ORACLE the superuser), can do is to
1. create the table with ROWDEPENDENCIES
2. force delayed-block cleanout prior to selecting data
3. use ORA_ROWSCN to determine the order that rows were inserted or modified
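For what it's worth, a minimal sketch of that recipe (table and column names invented for illustration):

-- ROWDEPENDENCIES stores an SCN per row rather than per block
CREATE TABLE system_logs (
  log_id       NUMBER PRIMARY KEY,
  message      VARCHAR2(4000),
  created_date DATE
) ROWDEPENDENCIES;

-- after block cleanout (a full table scan will force it), ORA_ROWSCN
-- reflects the order in which rows were committed
SELECT log_id, message, ORA_ROWSCN
FROM   system_logs
ORDER  BY ORA_ROWSCN;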
As luck would have it there is a thread discussing just that in the Database - General forum here:
ORA_ROWSCN keeps increasing without any DML

Similar Messages

  • BC4J Custom API: Batch Validation with Sequence-based primary key

    Hi people,
I am trying to create a BC4J Custom API using the Batch Validation feature of the iSetup Framework. However, my entity object has a sequence-based primary key, and this key is carried over to the View Object. As a result, I have three attributes marked as key attributes in the VO: InvoiceTypeId (the sequence), OrganizationId and InvoiceTypeCode (the alternate, developer key). The primary key is marked AZ_EXPORTABLE=FALSE because it must be rebuilt in the target using the alternate key.
I was able to successfully extract a single row to XML using this API (I am testing locally). When I try to import this XML file containing the single row, I get the following exception. Is this feature supported in Batch Validation mode?
    Regards
    Thiago Souza
    ** Exception starts **
    Started import...
    An exception occurred in API 'CLL Invoice Types'.
    oracle.apps.fnd.framework.OAException: An exception occurred in API 'CLL Invoice Types'.
         at oracle.apps.az.fwk.BEUtil.wrapperException(BEUtil.java:395)
         at oracle.apps.az.fwk.server.BEImport.populateTempTableForBatchValidation(BEImport.java:1927)
         at oracle.apps.az.fwk.server.BEImport.importXML(BEImport.java:371)
         at oracle.apps.az.fwk.server.BEApplicationModuleImpl.importFromXML(BEApplicationModuleImpl.java:404)
         at R12APITester.importFile(R12APITester.java:205)
         at R12APITester.importFile(R12APITester.java:180)
         at R12APITester.main(R12APITester.java:65)
    ## Detail 0 ##
    oracle.apps.fnd.framework.OAException: java.sql.SQLException: ORA-06550: line 32, column 29:
    PL/SQL: ORA-00904: "KEY31": invalid identifier
ORA-06550: line 32, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 33, column 29:
PL/SQL: ORA-00904: "KEY32": invalid identifier
ORA-06550: line 33, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 34, column 29:
PL/SQL: ORA-00904: "KEY33": invalid identifier
ORA-06550: line 34, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 35, column 29:
PL/SQL: ORA-00904: "KEY34": invalid identifier
ORA-06550: line 35, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 36, column 29:
PL/SQL: ORA-00904: "KEY35": invalid identifier
ORA-06550: line 36, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 37, column 29:
PL/SQL: ORA-00904: "KEY36": invalid identifier
ORA-06550: line 37, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 38, column 29:
PL/SQL: ORA-00904: "KEY37": invalid identifier
ORA-06550: line 38, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 39, column 29:
PL/SQL: ORA-00904: "KEY38": invalid identifier
ORA-06550: line 39, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 40, column 29:
PL/SQL: ORA-00904: "KEY39": invalid identifier
ORA-06550: line 40, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 41, column 29:
PL/SQL: ORA-00904: "KEY40": invalid identifier
ORA-06550: line 41, column 1:
PL/SQL: SQL Statement ignored
         at oracle.apps.fnd.framework.OAException.wrapperException(Unknown Source)
         at oracle.apps.az.fwk.server.BEValidationXMLParser.executeSql(BEValidationXMLParser.java:288)
         at oracle.apps.az.fwk.server.BEValidationXMLParser.collectUKFKValues(BEValidationXMLParser.java:254)
         at oracle.apps.az.fwk.server.BEImport.populateTempTableForBatchValidation(BEImport.java:1897)
         at oracle.apps.az.fwk.server.BEImport.importXML(BEImport.java:371)
         at oracle.apps.az.fwk.server.BEApplicationModuleImpl.importFromXML(BEApplicationModuleImpl.java:404)
         at R12APITester.importFile(R12APITester.java:205)
         at R12APITester.importFile(R12APITester.java:180)
         at R12APITester.main(R12APITester.java:65)

    Hi Thiago,
I would suggest testing your API first with row-by-row validation mode, where you resolve the foreign and primary keys as specified in the framework. That will help you understand the framework better; once it starts working for you, you can then try batch validation mode.
    Thanks
    Mugunthan.

  • Problem with foreign and primary keys migration from SQL Server to Oracle

Hi folks, I'm using SQL Developer to migrate from a SQL Server database to Oracle and I'm stuck on a couple of issues:
The worst of them so far is that I can't migrate any of the PKs and FKs. After successfully capturing the SQL Server DB model and converting it to Oracle, when the tool generates the scripts, all ALTER TABLE statements that add the PKs and FKs have their target columns duplicated.
For example, when I try to migrate a simple table that contains an Id (PK) and a Name column, the tool generates the following scripts:
PROMPT Creating Table TestTable...
CREATE TABLE TestTable (
  Id NUMBER(10,0) NOT NULL,
  Name VARCHAR2 NOT NULL
);
PROMPT Creating Primary Key Constraint PK_TestTable on table TestTable ...
ALTER TABLE TestTable
ADD CONSTRAINT PK_TestTable PRIMARY KEY
(
  Id,
  Id
)
ENABLE;
As for the FKs, the tool duplicates the columns as well:
ALTER TABLE SomeTable
ADD CONSTRAINT FK_SomeTable_SomeTable2 FOREIGN KEY
(
  SomeTable2Id,
  SomeTable2Id
)
REFERENCES SomeTable2
(
  Id,
  Id
)
ENABLE;
    Does anyone have a clue on how to solve these issues? I'd be greatly thankful for any answers!

    Hi Fernando,
I was unable to replicate this issue. My primary / foreign keys were defined using unique columns.
PROMPT Creating Primary Key Constraint PK_Suppliers on table Suppliers ...
ALTER TABLE Suppliers
ADD CONSTRAINT PK_Suppliers PRIMARY KEY
(
  SupplierID
)
ENABLE;
I tried a few things like
capturing twice and renaming both models the same
renaming the converted models
but with no luck.
I think this issue is occurring either at the capture or the convert phase.
1) Are you performing the capture online or offline?
2) Can you provide the entire DDL for one of these tables and its indexes, so I can try to replicate?
3) Did the capture or convert fail or have to be redone at any stage?
If all else fails, I would attempt a capture and convert again using a brand new repository (create a new schema in Oracle and associate the migration repository with it).
    Regards,
    Dermot
    SQL Developer Team
    Edited by: Dermot ONeill on Oct 22, 2009 12:18 PM

  • Static NAT refresh and best practice with inside and DMZ

I've been out of the firewall game for a while and have now been re-tasked with some configuration, both updating ASAs to 8.4 and making some new services available. So I've dug in to refresh my knowledge of NAT operation, and I have a question based on best practice and would like a sanity check.
This is very basic; I apologize in advance. I just need the cobwebs dusted off.
The scenario is this: if I have an SQL server on an inside network that a DMZ host needs access to, is it best to present the inside host (the SQL server in this example) via a static to the DMZ, or the DMZ host (the SQL client in this example) via a static to the inside?
I think it's best to present the higher-security resource into the lower-security network. For example, when a service from the DMZ is made available to the outside/public, the real IP from the higher-security interface is mapped to the lower.
So I would think the same would apply to the inside/DMZ boundary, making 'static (inside,dmz)' the 'proper' method for pre-8.3, and this for 8.3 and up:
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    Am I on the right track?

    Hello Rgnelson,
It is not related to the security level of the zone; rather, it is about what the behavior should be. What I mean is, for
nat (inside,dmz) static yy.yy.yy.yy
- Any traffic hitting the translated address yy.yy.yy.yy on the dmz zone will be redirected to the host xx.xx.xx.xx on the inside interface.
- Traffic initiated from the real host xx.xx.xx.xx will be translated to yy.yy.yy.yy if the host accesses any resources on the DMZ interface.
If you reverse it to (dmz,inside) the behavior will be reversed as well, so if you need to translate an address from the DMZ interface going to the inside interface you should use (dmz,inside).
For your case I would go with what is common: since the server is in the INSIDE zone, you should configure
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    At this time, users from the DMZ zone will be able to access the server using the yy.yy.yy.yy IP Address.
    HTH
    AMatahen

  • Best Practice with States and lots of code lines

    Hi.
    This is my first application in flex.
    I'm ok with as3.
Now, in as3 we were 'forced' to work mostly with external classes, so we hardly ever had a single code page with lots of lines.
In flex, using States leads to code with lots of lines IF we think of states as web site pages.
I'm not sure I understand it right. You mean: if a user visits a website built with 10 pages but accesses only 2 of them, would all 8 remaining pages still have to be downloaded into the swf the user loads? (this is, considering the usage of states as pages)
I'm building a system where the user logs in to use it.
2 states for now: login page and home page.
I access the db and get the user and password with this event dispatched from the db.result (this works, however I find it a too-old-style loop. Is there a better way, and of course, which?)
protected function usersService_resultHandler(event:ResultEvent):void
{
    allUsers = event.result as ArrayCollection;
    for (var i:uint = 0; i < allUsers.length; i++)
    {
        if (allUsers[i].user == tx_user.text && allUsers[i].password == tx_password.text)
            currentState = "home";
        else
            Alert.show("Fault", "Login"); // note: this fires for every non-matching row, not just once
    }
}
While starting to build the "home" page/state, I realized that my code would dramatically increase. Is this the best practice? Do I have to call another url after login (to open a Session - please, some Session tutorials in flex)? Or do I keep doing everything in states? I'm afraid my swf would grow too big.
    Thanks

    Ok.
The problem is that I'm not used to PHP, and I had generated the code that deals with the server automatically via Flex.
However, I was able to add a new function, and I could work out how to fetch values from the db to compare.
It's a Frankenstein function, but it works. For now, there is no way to know whether the user mistyped the password or the username.
public function getUserVerification($user, $pass) {
    $stmt = mysqli_prepare($this->connection, "SELECT user, password FROM $this->tablename WHERE user=? AND password=?");
    $this->throwExceptionOnError();
    mysqli_stmt_bind_param($stmt, 'ss', $user, $pass);
    $this->throwExceptionOnError();
    mysqli_stmt_execute($stmt);
    $this->throwExceptionOnError();
    $row = new stdClass();  // holder for the fetched columns
    mysqli_stmt_bind_result($stmt, $row->user, $row->password);
    if (mysqli_stmt_fetch($stmt)) {
        return 1;  // a matching row exists
    } else {
        return 0;  // no match
    }
}
Also, I had to update the _Super_UsersService.as class Flex had generated earlier, when I first created the php code to deal with the db.
Finally, I had to assign return and input types for the new function I created.
Amazing... it works.
Now, when the submit button on the login is pressed, flex sends the user and password so php compares them, instead of looping through an Array.
Also, I put all this code inside a "loginView" component, so my main app is clean again.
I guess I understand the idea of using components and reusing them as much as possible. I just have to get used to how to access a component's values from outside and vice-versa.
Now, creationPolicy is something I will look into. This might be interesting.
    Thanks a lot.
    Btp~

  • Best practice with LR5 and film scanners

    Currently I'm trying out an OpticFilm 8200i Ai, but I'm looking for any suggestions on any scanner.
    I'm primarily scanning old 35mm B&W negatives, color negatives and positives for archival and print. I'm primarily Mac-based using the latest OS10.9 iteration.
    I was looking at the Lightroom product manager last week and could have bothered him then, but I didn't think to ask.
As it is, most discussions are over a year and a half old, so it would be good to have an update. Most workflows include using some arcane scanner software solution. I'm thinking that the most elegant solution would be using Lightroom Capture.
    What do you think?
    RAW

    Teledyol wrote:
    The arcane software I'm using is Silverfast. It has many features, but one would expect all of this capacity to be in Lightroom or Photoshop.
    Photo and painting programs like Photoshop & Lightroom (and their competitors) have traditionally not had direct scanning features. A lot of people believe that they have "scanned into Photoshop" in the past, but what was always really happening was that they were scanning using a plug-in that was not provided with Photoshop.
    Because Silverfast has a range of scanning features that Lightroom will never have, Silverfast and Lightroom can be a great team. I also use the Lightroom "watch folder" method described above, with VueScan, and it makes the process about as seamless as if Lightroom was driving the scanner itself. Just have Silverfast batch-scan in the background and have it dump scan after scan into the watch folder, and you can stay in Lightroom and do your fast batch processing there as Lightroom picks up the scans in the folder it's watching. It's actually pretty easy and makes the need for direct Lightroom scanning unnecessary.
It isn't likely that Lightroom will add scanner support because scanning is not a high-growth area of photography. If they were going to devote engineering time to figuring out how to drive a frustratingly diverse range of hardware, they would put those resources into improving the support for tethered digital cameras. Even if Lightroom were to add scanner support, it is unlikely that it would be at the level of Silverfast, which has spent years becoming one of the best. Let Silverfast handle the very specialized task of driving the scanner hardware, and let Lightroom handle the post-scan image processing.

  • Help with sys_guid and Primary Keys

I am trying to use sys_guid to create unique PK_IDs in my tables (see below). This works fine in Oracle, but when I try to insert into a remote DB I get a precision error. Is there a way to control the length/format of sys_guid? Is there something easier to use for automatically creating PK IDs?
CREATE TABLE TEST
( PK_ID NUMBER DEFAULT to_number(sys_guid(), 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
);
> This works fine in oracle, but when I try to insert into a remote DB I get a precision error.
What sort of "remote DB"? Oracle or non-Oracle?
Another thought occurs: SYS_GUID includes an element generated by the host or the thread process - does a straightforward SYS_GUID call (i.e. without the TO_NUMBER bit) work in your set-up?
    Cheers, APC
    Blog : http://radiofreetooting.blogspot.com/
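For reference, a commonly suggested alternative (sketched here, not from the thread itself): store the GUID as RAW(16) and skip the TO_NUMBER conversion entirely, which sidesteps the NUMBER precision problem on the remote side:
CREATE TABLE test
( pk_id RAW(16) DEFAULT sys_guid() PRIMARY KEY
);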

  • Best practice for PK and indexes?

    Dear All,
What is the best practice for placing primary keys and indexes? Should we keep them in the same tablespace as the table, or should we create a separate tablespace for all indexes and primary keys? Please note I am talking about a table that has 21 million rows at the moment and is growing by 10k to 20k rows daily. This table is also heavily involved in daily reports and is causing slow performance. Currently the complete table, with all associated objects such as indexes and the PK, is stored in one separate tablespace. If my approach is right, please advise how I can improve the performance of retrieval and DML operations on this table.
    Thanks in advance..
    Zia Shareef
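(For reference, moving indexes into their own tablespace is a one-liner per index; the tablespace and index names below are placeholders:
CREATE INDEX reading_area_idx ON reading (area_code) TABLESPACE indx_ts;
ALTER INDEX reading_pk REBUILD TABLESPACE indx_ts;
Whether this actually helps query performance, as opposed to administration, is a separate question.)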

Well, thanks for the valuable advice... I am using Oracle 8i, and let me tell you the exact problem...
My billing database has two major tables with almost 21 million rows each... one holds collection data and the other invoices... many reports show the data by joining the Customer + Collection + Invoices tables.
There are 5 common fields between the invoices (reading) and collection tables:
YEAR, MONTH, AREA_CODE, CONS_CODE, BILL_TYPE (adtl)
One of my batch processes has the following update, and it is VERY VERY SLOW:
UPDATE reading r
SET bamount = (SELECT SUM(camount)
               FROM collection cl
               WHERE r.ryear = cl.byear
               AND r.rmonth = cl.bmonth
               AND r.area_code = cl.area_code
               AND r.cons_code = cl.cons_code
               AND r.adtl = cl.adtl)
WHERE area_code = 1;
Tentatively, area_code 1 has 20,000 consumers.
Each consumer may have 72 invoices, and against these invoices there may be 200 rows in the collection table (the system has provision to record partial payments against one invoice).
NOTE: Please note that presently my process is based on cursors, so the above query runs for one consumer at a time, but just to give an idea I have written it for the whole area.
Mr. Yingkuan, can you please tell me how I can check whether the table's statistics are stale and how I can refresh them? Do they really affect performance?
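For reference, a minimal sketch of checking and refreshing statistics on 8i (table names as given above):
-- when were statistics last gathered, and on how many rows?
SELECT table_name, num_rows, last_analyzed
FROM   user_tables
WHERE  table_name IN ('READING', 'COLLECTION');
-- refresh them; later releases prefer DBMS_STATS.GATHER_TABLE_STATS
ANALYZE TABLE reading ESTIMATE STATISTICS SAMPLE 10 PERCENT;
Stale statistics can certainly hurt performance, since the optimizer builds its plans from them.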

  • DNS best practices for hub and spoke AD Architecture?

I have an Active Directory Forest with a forest root such as joe.co, a root domain of the same name, and root DNS servers (Domain Controllers) dns1.joe.co and dns2.joe.co.
I have child domains with names in the form region1.joe.co, region2.joe.co and so on, with DNS servers dns1.region1.joe.co and so on.
Each region has distributed offices that may have a DC in them, with servers named in the form dns1branch1.region1.joe.co.
Overall my DNS tests out okay, but I want to get the general guidelines for setting up new DCs correct.
Configuration:
The root DC/DNS server dns1.joe.co's adapter settings point DNS to itself, then to the two other root domain DNS/DCs, dns2.joe.co and dns3.joe.co.
The other root domain DNS/DCs' adapter settings point to the root server dns1.joe.co, then to themselves (e.g. dns2.joe.co), and then to 127.0.0.1.
The regional domains have a root DNS server dns1.region1.joe.co whose adapter settings point to the root server dns1.joe.co and then to itself.
The additional regional domain DNS/DCs' adapter settings point to dns1.region1.joe.co, then to themselves, then to dns1.joe.co.
What would you do to correct or improve this topology (and these settings)?
    Thanks in advance
    just david

    Hi,
According to your description, my understanding is that you would like suggestions about your DNS topology.
In theory, there is no obvious problem. Apart from the namespace and server planning for DNS, zone placement also needs consideration. If you place a DNS server in each domain and subdomain, confirm whether the resulting DNS traffic will affect network performance.
Besides that, fault tolerance and security are also necessary.
We usually recommend that:
A DC with DNS should point to another DNS server as primary and to itself as secondary or tertiary. It should not point to itself as primary, due to various DNS islanding and performance issues that can occur. And when referencing a DNS server on itself, a DNS client should always use a loopback address and not a real IP address. For detailed information you may reference:
    What is Microsoft's best practice for where and how many DNS servers exist? What about for configuring DNS client settings on DC’s and members?
    http://blogs.technet.com/b/askds/archive/2010/07/17/friday-mail-sack-saturday-edition.aspx#dnsbest
    How To Split and Migrate Child Domain DNS Records To a Dedicated DNS Zone
    http://blogs.technet.com/b/askpfeplat/archive/2013/12/02/how-to-split-and-migrate-child-domain-dns-records-to-a-dedicated-dns-zone.aspx
    Best Regards,
    Eve Wang
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Dynamic SQL Joining between tables and Primary keys being configured within master tables

Team, thanks for your help in advance!
I'm looking to code a dynamic SQL statement which should refer to master tables for table names and primary keys, and then join for insertion into the target tables.
    EG:
INSERT INTO HUB.dbo.lp_order
SELECT *
FROM del.dbo.lp_order t1
WHERE NOT EXISTS ( SELECT *
                   FROM hub.dbo.lp_order t2
                   WHERE t1.order_id = t2.order_id )
SET @rows = @@ROWCOUNT
PRINT 'Table: lp_order; Inserted Records: ' + CAST(@rows AS VARCHAR)
-- Please note the database names are going to remain the same, but the table names and join conditions on keys
-- should vary for each table configured in the master tables
Sample of the master configuration tables with table info and PK info:
Table_Info
Table_info_ID    Table_Name
1                lp_order
7                lp__transition_record
Table_PK_Info
Table_PK_Info_ID    Table_info_ID    PK_Column_Name
2                   1                order_id
8                   7                transition_record_id
    There can be more than one join condition for each table
    Thanks you !
    Rajkumar Yelugu

    Hi Rajkumar,
    It is glad to hear that you figured the question out by yourself.
There's a flaw in the WHILE loop in your sample code, just in case you hadn't noticed it; please see below.
--In this case, it goes into an infinite loop
DECLARE @T TABLE(ID INT)
INSERT INTO @T VALUES(1),(3),(2)
DECLARE @ID INT
SELECT @ID = MIN(ID) FROM @T
WHILE @ID IS NOT NULL
BEGIN
    PRINT @ID
    -- when no row qualifies, the assignment leaves @ID unchanged rather
    -- than setting it to NULL, so the loop never terminates
    SELECT @ID = ID FROM @T WHERE ID > @ID
END
    So a cursor would be the appropriate option in your case, please reference below.
DECLARE @Table_Info TABLE
(
    Table_info_ID INT,
    Table_Name VARCHAR(99)
)
INSERT INTO @Table_Info VALUES(1,'lp_order'),(7,'lp__transition_record');
DECLARE @Table_PK_Info TABLE
(
    Table_PK_Info_ID INT,
    Table_info_ID INT,
    PK_Column_Name VARCHAR(99)
)
INSERT INTO @Table_PK_Info VALUES(2,1,'order_id'),(8,7,'transition_record_id'),(3,1,'order_id2')
DECLARE @SQL NVARCHAR(MAX),
        @ID INT,
        @Table_Name VARCHAR(20),
        @whereCondition VARCHAR(99)
DECLARE cur_Tabel_Info CURSOR
FOR SELECT Table_info_ID, Table_Name FROM @Table_Info
OPEN cur_Tabel_Info
FETCH NEXT FROM cur_Tabel_Info INTO @ID, @Table_Name
WHILE @@FETCH_STATUS = 0
BEGIN
    -- build the join condition from every PK column configured for this table
    SELECT @whereCondition = ISNULL(@whereCondition + ' AND ', '') + 't1.' + PK_Column_Name + '=' + 't2.' + PK_Column_Name
    FROM @Table_PK_Info
    WHERE Table_info_ID = @ID
    SET @SQL = 'INSERT INTO hub.dbo.' + @Table_Name + '
    SELECT * FROM del.dbo.' + @Table_Name + ' AS T1
    WHERE NOT EXISTS (
    SELECT *
    FROM hub.dbo.' + @Table_Name + ' AS T2
    WHERE ' + @whereCondition + ')'
    SELECT @SQL
    --EXEC(@SQL)
    SET @whereCondition = NULL
    FETCH NEXT FROM cur_Tabel_Info INTO @ID, @Table_Name
END
-- missing in the original post: release the cursor when done
CLOSE cur_Tabel_Info
DEALLOCATE cur_Tabel_Info
Supposing you had already noticed and fixed the flaw, sharing your answer is always welcome.
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Best Practice for Planning and BI

What's the best practice for Planning and BI infrastructure - should they be set up combined on one box or on separate ones? What are the factors to consider?
    Thanks in advance..

    There is no way that question could be answered with the information that has been provided.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and Viewer look and feel.
    Have any of you been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • SAP BO Dashboards 4.1 best practice on layout and components

    Dear SCN,
I have a requirement to create a BO 4.1 dashboard with data & visualization based on an excel sheet which is currently in use as a Mgmt dashboard. The current excel dashboard has more than 100 KPIs in one view, which is readable only if you put it on a slide and view it full screen by running a slideshow.
Question 1:
1. Given that the suggested size of the Xcelsius canvas is not more than 1024 x 768, so that it is viewable without scroll bars in BI launchpad, in any browser, or in pdf, I am trying to confirm in this forum that a canvas size of 1024 x 768 is the recommended maximum for the dashboard to display clearly in any browser/BI launchpad. Pls confirm, as it will help me in designing the KPIs and their visualization.
Question 2:
1. I am using the BICS connection and accessing the source data from BW. Because the number of KPIs is large, spanning 10 cubes and 40 queries as the data comes from different modules, I would like to know the recommended number of query/cube connections in a dashboard using BICS connectivity that does not affect performance.
2. For the same dashboard using a BICS connection, what is the ideal number of components, such as charts/scorecards/spreadsheet tables, recommended to ensure better performance?
I appreciate answers which can help finalize the design for this dashboard, whose data and visualization requirements are very high compared to normal dashboards.
    Thanks and Regards
    Jana

    Hi Suman,
Thanks for your answers. Your answers and the links you attached are helpful; they answered my questions related to canvas size and connections.
I am still expecting some benchmark numbers, as per best practice, for the number of components to use to ensure the dashboard loads well. As an increase in the number of components increases both the size of the dashboard and the time needed to load data into the components, I am looking for that number based on the points below.
1. When I say number of components, I am not counting components like labels, text boxes, combo boxes or list boxes. I am counting the components used for visualization and interactive drill-down on top of the visualized charts (e.g. column charts, pie charts, gauges).
2. I am not going to use many calculations/formulas in my dashboards, as the values and structure are almost the same as the BEx query.
3. There are around 10 to 12 connections.
4. The data sets are not more than 900 rows in total. For any control, we will be binding only 100 rows at most, as the data for the KPIs is summarized at the year/month level in the BW layer.
Since there are many KPIs, there are many visualizations, and we can't re-use the visualization charts for most of the KPIs. Currently I am ending up with ~35 charts/gauges, along with other label and selection controls, which I will use to show 100 KPIs with unique visualization requirements, and I am going for a tab-wise layout, fairly dynamic, to accommodate and separate them logically.
Hope these details give a clear picture of why I am looking for a benchmark on the number of components.
    I appreciate your help!
    Thanks and Regards
    Jana

  • Unique and primary key

A column with a unique constraint + a not null constraint = a primary key! (to some extent) Is that correct?
I invite your ideas.

    http://www.techonthenet.com/oracle/unique.php
    http://www.allapplabs.com/interview_questions/db_interview_questions.htm#q13
    Difference between Unique key and Primary key(other than normal difference)
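For illustration, a small sketch of the differences that remain (Oracle): a table gets only one primary key but may carry many unique constraints, a unique column without NOT NULL still accepts NULLs, and an unqualified foreign-key reference resolves to the primary key:
CREATE TABLE demo (
  id   NUMBER CONSTRAINT demo_pk PRIMARY KEY,            -- one per table; implies NOT NULL
  code VARCHAR2(10) NOT NULL CONSTRAINT demo_uk UNIQUE   -- unique + not null: close, but still not the PK
);
CREATE TABLE child (
  demo_id NUMBER REFERENCES demo                         -- no column list: defaults to demo(id), the PK
);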

  • SAP Business One Best-Practice System Setup and Sizing

    <b>SAP Business One Best-Practice System Setup and Sizing</b>
    Get recommendations from SAP and hardware specialists on system setup and sizing
    SAP Business One is a single, affordable, and easy-to-implement solution that integrates the entire business across financials, sales, customers, and operations. With SAP Business One, small businesses can streamline their operations, get instant and complete information, and accelerate profitable growth. SAP Business One is designed for companies with less than 100 employees, less than $75 million in annual revenue, and between 1 and 30 system users, referred to as the SAP Business One sweet spot. The sweet spot covers various industries and micro-verticals which have different requirements when it comes to the use of SAP Business One.
    One of the initial steps during the installation and implementation of SAP Business One is the definition of the system landscape and architecture. Numerous factors affect the system landscape that needs to be created to efficiently run SAP Business One.
    The <a href="http://wiki.sdn.sap.com/wiki/display/B1/BestPractiseSystemSetupand+Sizing">SAP Business One Best-Practice System Setup and Sizing Wiki</a> provides recommendations on how to size and configure the system landscape and architecture for SAP Business One based on best practices.

    For such high volume licenses, you may contact the SAP Local Product Experts.
    You may get their contact info from this site
    [https://websmp209.sap-ag.de/~sapidb/011000358700001455542004#India]
