Designer Database

I am new to Designer so I may have some stupid questions. Be patient with me :)
I am attempting to reverse engineer our Oracle applications into Designer. I want to be able to use Designer to document our existing systems and create a metadata registry. I feel Designer is the best tool to do this.
Any good tutorials in Oracle or elsewhere for using Designer?
Any recommended books for using Designer 10g ?
Any tips and tricks?
Also, I have reverse engineered a couple of apps and have created a DB instance under each app, so the instance is repeated across the application system folders. I am wondering if I should be doing this differently, with the instance created in one application system folder and shared across the applications that reside on that instance.
Thanks in advance for any help.
Bruce Carson.

Start here:
http://www.oracle.com/technology/products/designer/documentation.html
and Oracle Designer 6i Tutorial:
http://download-uk.oracle.com/otn_hosted_doc/designer/misc/276931/dsgnr_tuttitle_65.htm
Although it is not 10g, the concepts are the same and it will help get you started.
If you have access to metalink.oracle.com you'll also find a lot of useful Support articles there for Designer.

Similar Messages

  • Design Database

    What are the different approaches to solve this problem?
    Design database tables to mimic a hierarchical file system (e.g. drives, directories, files; include common properties like name, read-only, size, created date).
    a)      Name the database “Interview_FirstnameLastName”.
    b)      Design should support files in root of drive or any number of levels down.  (e.g.  c:\test.txt, d:\test\text.txt, c:\test\test\text.txt, etc.)
    c)      Create a script to insert test data into tables.  Should have at least 25 files.  Alternatively, create process to take a base path and populate the database by traversing a directory tree.
    d)      Provide scripts to create tables, relationships, indexes or a backup of the database. 
    e)      Write a stored procedure (usp_FileSearch) that takes 1 text parameter (@criteria).  It should return a dataset containing all the files whose filename contains @criteria.
      Return file ID, full path, file name, read-only, size, and created date.
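    A minimal T-SQL sketch of d) and e), assuming a single self-referencing table; the table and column names are my own, only usp_FileSearch and @criteria come from the spec:

    -- Hypothetical single-table layout (drives, folders and files in one table).
    CREATE TABLE FileSystemItem (
        ItemID      INT IDENTITY(1,1) PRIMARY KEY,
        ParentID    INT NULL REFERENCES FileSystemItem(ItemID),  -- NULL for drives
        ItemType    VARCHAR(6) NOT NULL CHECK (ItemType IN ('DRIVE','FOLDER','FILE')),
        Name        NVARCHAR(255) NOT NULL,
        IsReadOnly  BIT NULL,
        SizeBytes   BIGINT NULL,
        CreatedDate DATETIME NULL
    );
    GO
    -- e) usp_FileSearch: files whose name contains @criteria, full path rebuilt with a recursive CTE.
    CREATE PROCEDURE usp_FileSearch @criteria NVARCHAR(255)
    AS
    BEGIN
        WITH Tree AS (
            SELECT ItemID, ParentID, ItemType, Name, IsReadOnly, SizeBytes, CreatedDate,
                   CAST(Name AS NVARCHAR(4000)) AS FullPath
            FROM   FileSystemItem WHERE ParentID IS NULL
            UNION ALL
            SELECT c.ItemID, c.ParentID, c.ItemType, c.Name, c.IsReadOnly, c.SizeBytes, c.CreatedDate,
                   CAST(t.FullPath + '\' + c.Name AS NVARCHAR(4000))
            FROM   FileSystemItem c JOIN Tree t ON c.ParentID = t.ItemID
        )
        SELECT ItemID, FullPath, Name, IsReadOnly, SizeBytes, CreatedDate
        FROM   Tree
        WHERE  ItemType = 'FILE' AND Name LIKE '%' + @criteria + '%';
    END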

    Hi,
    840992 wrote:
    I am using Oracle 10.2
    I was trying to create 3 tables with the below columns and constraints.  I wanted to know if these tables work; then I will use the CONNECT BY queries to get the data.
      tblDrive (DriveID, Name)   -   DriveID is the Primary Key
      tblFolders (FolderID, Name, FolderLevel, DriveID)  -  FolderID is the PK and DriveID is the Foreign Key
      tblFiles (FileID, FileName, FullPath, ReadOnly, Size, CreatedDate, FolderID)  -  FileID is the PK and FolderID is the FK
    Once again, I think one table would be a better design, but if you really want to create 3 tables, I'm sure that won't get you an F in the course.  Having 3 tables will make CONNECT BY queries more complicated; you'll need to do UNIONs and/or joins every time.  I would create just one table, that looks pretty much like the Files table (as mentioned above, starting every table name with tbl isn't a very good idea) you sketched above.  ParentID would be a better name than FolderID.  If some of the columns do not apply to folders or to drives, then simply leave them NULL.  (However, don't ReadOnly and CreatedDate apply to folders as well as files?)
    If you do create 3 tables, you'll need a ParentFolderID column in Folders, since folders can contain other folders.  Also, I would not store FolderLevel or DriveID in the folders table.  In a file system, you should be able to move a folder, and all its sub-folders, from one drive to another, or from one level to another easily.  You don't want to go through the tree changing all those rows every time a folder moves.  For the same reason, I wouldn't store FullPath in the Files table.  You can get it using the SYS_CONNECT_BY_PATH function whenever you need it.
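    A minimal Oracle sketch of that single-table design, with the full path derived by SYS_CONNECT_BY_PATH instead of being stored (table and column names are illustrative):

    -- Single table for drives, folders and files; a ParentID-style column instead of FolderID/DriveID.
    CREATE TABLE file_system_items (
        item_id      NUMBER        PRIMARY KEY,
        parent_id    NUMBER        REFERENCES file_system_items (item_id),  -- NULL for drives
        item_type    VARCHAR2(10)  CHECK (item_type IN ('DRIVE','FOLDER','FILE')),
        name         VARCHAR2(255) NOT NULL,
        read_only    CHAR(1),
        byte_size    NUMBER,
        created_date DATE
    );

    -- Full path derived on demand rather than stored in the table.
    SELECT item_id,
           SYS_CONNECT_BY_PATH(name, '\') AS full_path,
           name, read_only, byte_size, created_date
    FROM   file_system_items
    WHERE  item_type = 'FILE'
    START WITH parent_id IS NULL
    CONNECT BY PRIOR item_id = parent_id;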

  • Software to design database

    A task has been given to me to design a database. Can anybody send me a link to download software for designing databases?

    Do you mean a data modelling tool?
    What are your other requirements? For Windows? Obviously you don't want to pay for it, but do you need a tool for less than 30 days (in which case you could use an evaluation copy) or for longer (in which case your options are more limited)? Do you want to be able to generate DDL? Just for Oracle or for other databases too?
    Cheers, APC

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company for tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice to design a database in this situation? 
    When accessing the database from a C# application, which is better to use: direct SQL commands or views? 
    A detailed description of what is best to do in such a scenario would be great. 
    Thanks in advance.
    Edit:
    Table columns are:
    [MessageID], [MessageUnit], [MessageLong], [MessageLat], [MessageSpeed], [MessageTime], [MessageDate], [MessageHeading], [MessageSatNumber], [MessageInput], [MessageCreationDate], [MessageInput2], [MessageInput3], [MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I came across during my 9 months at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate a report for the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issue starts: they are pulling their data from our server over the internet while we have 2,000 units sending data to our server, and they keep getting read timeouts since SQL Server gives priority to the inserts and holds all SELECT commands. I solved it temporarily in the code by using "Read Uncommitted" once I initialize a connection through C#. 
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that the one who wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003. 
    Now talking about reports, there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that SELECT statements don't get kicked out in favor of INSERT commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so? 
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd better use stored procedures or views rather than hard-coded SELECT statements; what difference will this bring me when talking about performance?
    Thanks in advance. 
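    Regarding the snapshot question: as far as I know SQL Server does not use row versioning until you switch it on at the database level. A minimal sketch (the database name TrackingDb is made up, and this needs SQL Server 2005 or later, so it would not apply to the older server mentioned above):

    -- Database-wide: plain READ COMMITTED SELECTs then read row versions automatically.
    ALTER DATABASE TrackingDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Or opt in per session: enable snapshot isolation, then request it explicitly.
    ALTER DATABASE TrackingDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;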

  • Designing database structure and SSAS Tabular Model cubes

    Hi.
    I need to design a database and SSAS tabular models for my clients, but I am confused about which way I should implement it.
    Data for all the clients is stored in a single database with a unique ClientId for each client; I have 15 such tables under a single database which store information about all the clients.
    The task is to create a SharePoint site collection for each client which will display a Power View dashboard by taking data from the above database.
    So far I have created an SSAS Tabular Model for each client (XClientModel, YModelClient) using BIDS and SQL queries to extract data for the respective clients (select * from Table1 where ClientID="X"), and using a Power View external connection to this model I have created the dashboard and other SharePoint information.
    I am not sure if creating a different model for each client is suitable, or whether I should first separate the data for each client into a separate database and then create a model based on the respective client's database.
    Can someone highlight the pros and cons of using a
    SINGLE database with multiple Tabular Models (one with many) AND a separate database with its own model (one to one)?
    This is understandable, but just putting it here. Important note: Data for client X shouldn't be visible to client Y on SharePoint.
    Please let me know if further information is required.

    Hi Sgms,
    In your description, you said that all the clients' information is stored in a single database; now you want to know which method is better, a single database with multiple Tabular Models or a separate database for each model?
    In your scenario, all the information is stored in a single database, so why do you want to separate it or create multiple tabular models? If you create multiple models, then you need to change the data source to create the Power View dashboard for each client. As per my understanding, you just need to create one tabular model to load all the information, and then use this model to create the Power View dashboard, using filters to display the information for each client.
    Reference:
    Lesson 1: Create a New Tabular Model Project
    Filtering, Highlighting, and Slicers in Power View
    Regards,
    Charlie Liao
    TechNet Community Support

  • Designing Database Tables

    Hi,
    Let's assume we have to develop an application for a bank.
    There are different types of Accounts in a Bank. ex: Fixed Deposit Accounts, Current Accounts, Savings Accounts
    There is also the possibility of adding new account types.
    How should we design the database?
    Have one table to store common account data and three more tables to store the data of the 3 account types?
    But I would have to create a new table and amend my code to access the new table if I add another type of account.
    Another method I thought of is to use a table to store configuration data.
    AccountConfiguration Table:
    Account Type Column Id Column Desc
    =========== ====== =========
    And add column details for each table.
    Then have one main table to save details of all 3 account types.
    AccountDetails Table
    AccountID Column Id Data
    ========= ========= =====
    Please help me to identify the best method or is there any other way.
    Best Regards,
    Chamal.

    You are in danger of straying into the big mistake that is Entity-Attribute-Value (EAV). You don't want to go there.
    Yes, you'd have to amend your code if you introduced a new account type. Surely you'd have to do that anyway if it had new attributes?
    If there aren't too many account types or too many columns altogether, you could simplify to a single table with all common and type-specific columns. The type-specific columns would be NULL for other types and check constraints would be used to ensure the appropriate columns were used for the account type.
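    A rough sketch of that single-table approach with check constraints (the account types and columns here are invented for illustration):

    -- One table for all account types; type-specific columns stay NULL for other types.
    CREATE TABLE accounts (
        account_id      NUMBER        PRIMARY KEY,
        account_type    VARCHAR2(20)  NOT NULL
                        CHECK (account_type IN ('FIXED_DEPOSIT', 'CURRENT', 'SAVINGS')),
        customer_id     NUMBER        NOT NULL,
        balance         NUMBER(15,2)  NOT NULL,
        maturity_date   DATE,           -- fixed deposits only
        overdraft_limit NUMBER(15,2),   -- current accounts only
        CONSTRAINT chk_maturity CHECK (
            (account_type = 'FIXED_DEPOSIT' AND maturity_date IS NOT NULL)
            OR (account_type <> 'FIXED_DEPOSIT' AND maturity_date IS NULL)),
        CONSTRAINT chk_overdraft CHECK (
            account_type = 'CURRENT' OR overdraft_limit IS NULL)
    );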

  • Design Database for Reports

    I am using SQL 2008 R2 SE. Currently I have separated out our report databases that are being used for internal purposes. These report databases are log-shipped secondary databases hosted on a secondary server. The reason I separated the report databases from the transactional databases is that I didn't want the queries used for reports causing blocking and deadlocks with the queries used by our apps.
    However, due to growth in data, the queries that I used in the rdl files are taking longer to run, so some reports are taking a very long time to load and output the results. And since I am querying against the standby secondary database I cannot add indexes.
    But the query execution plan suggests that adding some indexes would improve performance by almost 90%. If I add these indexes to the primary database then it degrades performance, as it causes slowness in the applications. The applications and
    the reports use different queries.
    I want to take experts opinions on how to implement a better strategy for reports in this scenario. Experts please share your valuable thoughts.

    For a similar scenario, I have implemented SQL Server transactional replication.
    My scenario:
    Server A, DB name XYZ. Reports needed to run against the XYZ DB; to avoid blocking and improve report performance, we decided to configure replication to another server and install the reports on that different server.
    Server B, replicated DB name Rep_XYZ, with SSRS installed on server B.
    Coming to the index part: I didn't create any extra indexes on the subscribed database. In your case, some indexes would delay inserts/deletes and overall performance would be degraded, so you need to decide which indexes are required.
    Thanks,
    Satish Kumar.
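    As an illustration of indexing only the subscriber, something like the following could be created on the replicated reporting database but not on the publisher (table, column and index names here are purely hypothetical):

    -- Index created on the replicated reporting database only, not on the OLTP publisher,
    -- so the write workload does not pay for it.
    CREATE NONCLUSTERED INDEX IX_Report_Lookup
        ON dbo.SomeReportTable (CustomerID, TransactionDate)
        INCLUDE (Amount, Status);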

  • Question about app design - database + session object in JSP

    hi all
    I am studying this application built mostly with servlets and JSPs, and there are several questions I want to ask your opinions about.
    First of all, I noticed that the application hits the database to get/save data very frequently. From one page to another it saves data collected from the user to the DB, and retrieves data from the DB to display on the next page. It does this a lot. Would this decrease the overall performance of the application? I mean, a DB hit requires network traffic overhead; wouldn't it be better if all the data collected from the user were stored in a session object temporarily, and all the data displayed on those pages were retrieved at start time, with one save process at the end? It uses an Oracle DB, if that makes any difference. Should we try to avoid DB hits as much as possible?
    My next question is: is it a good approach to keep information in the session object, even if there is a lot of data to keep?
    Another question is about the DB connection. In a pooled environment (WebLogic server) we use JNDI in the code to get a connection from the pool, use it and close it. When we close a connection with the close() method, what really happens? Does the connection get returned to the pool or is it destroyed completely?
    I know this is a lot to ask; I appreciate your help very much. Looking forward to seeing some feedback.

    No, I don't have tables of values. I have a Java 1.5 enumeration, like for instance:
    public enum VelocityConvention {
       RELATIVISTIC,
       REDSHIFT;
    }
    and a class Velocity that contains a convention and a value like so:
    public class Velocity {
       public VelocityConvention getConvention() {...}
       public double getValue() {...}
       public void set(VelocityConvention conv, double value) {...}
    }
    When I persist the Velocity class to the database, I want a field called convention that holds the appropriate value of the enumeration. That much is done how I explained before.
    I want to have a selectOneMenu for setting the convention. Via trial and error, I found that MyFaces wasn't able to automatically convert from a string back to a proper VelocityConvention enum constant. It can, of course, convert from the enum to a string because it just calls toString(). But I need both directions for any UIInput element I use, be it a selectOne or just a straight inputText.

  • How to design database objects

    Please let me know how to design tables, indexes, views, synonyms, and sequences.
    1. What is considered when designing those objects from a business point of view, and how does it influence them?
    2. What Oracle parameters need to be considered when designing those objects, and how do they benefit or degrade performance?
    3. Anything else you think is important?
    4. Do we have any tool to do this job?
    Please help me understand this.
    Thank you.

    Hi,
    Go through the following link. It may be helpful to you.
    http://stanford.edu/dept/itss/docs/oracle/10g/appdev.101/b10799/adobjdes.htm
    Thank you

  • Modelling databases in Oracle Designer environment

    Today I had a laboratory session whose subject was "Modelling data and designing database schemas in the Oracle Designer environment", and I have to make a report for tomorrow. I can't find the answers to some of the questions which our professor told us to include in the report. I would be very grateful if anyone here can help me.
    1) What does the SQL definition of the table's key look like if the relationship has the PRIMARY UID attribute?
    2) Is there any difference in the SQL code between references corresponding to transferable and non-transferable relationships?
    3) Why do some tables have more columns than the number of attributes in their entities (the entities contain the same things as the tables)?
    My English isn't perfect, sorry for that, but I hope that I wrote the questions correctly.
    Thanks for any help.

    1) a primary UID is a primary key in SQL
    2) No.
    3) OD entity models don't include inherited referential key attributes, whereas the SQL tables (obviously) require them.
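    To illustrate points 1 and 3, the generated DDL typically amounts to something like this (table and constraint names are made up):

    -- The entity's primary UID becomes a PRIMARY KEY constraint on the generated table.
    CREATE TABLE departments (
        dept_id   NUMBER(10)    NOT NULL,
        dept_name VARCHAR2(100) NOT NULL,
        CONSTRAINT dep_pk PRIMARY KEY (dept_id)
    );

    -- A relationship included in a child entity's primary UID surfaces as an inherited
    -- foreign-key column that also participates in the child's primary key (question 3).
    CREATE TABLE employees (
        emp_id  NUMBER(10) NOT NULL,
        dept_id NUMBER(10) NOT NULL,
        CONSTRAINT emp_pk PRIMARY KEY (dept_id, emp_id),
        CONSTRAINT emp_dept_fk FOREIGN KEY (dept_id) REFERENCES departments (dept_id)
    );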

  • Database Architecture for OnDemand design

    Hi All,
    We have a custom application, and every time we release we give the clients all our executables and they install them on their machines.
    Instead, we want to have one application installed on a server on our side and give domains to all the clients for access.
    So we want to design the database in one of the following ways:
    1) There will be only one schema; all the packages and tables will be present in one schema, each client will be distinguished by a column which stores that client's domain, and the customizations will be placed in only one set of packages. There will also be some archive tables to remove the data from the parent tables once all the transactions are completed.
    2) There will be separate schemas for each client's tables, and each can have separate packages for customizations.
    3) There will be separate schemas for tables only, not for packages; there will be only one set of packages.
    Please advise which one will do better in terms of performance and coding for the customizations.
    Thanks

    2) There will be separate schemas for each client's tables, and each can have separate packages for customizations.
    This seems like it would be the best option.
    Each client should have a separate schema and separate packages.
    You don't want multiple clients sharing the same schema.
    It would also be recommended to have separate tablespaces (schema/user/temp) per client. You don't want to run into any legal issues.
    Keep in mind the following when going to Oracle OnDemand (I have been through some discussions in the past with OOD)
    Once you dump your database and transfer to OOD, you'll no longer have the ability to manage the structure. Setting a good design is very important upfront especially if you want your database to be scalable. Changes being made to database design will be like pulling a tooth. Maintenance will essentially occur when they see fit, but good thing is that it will be designed as HA so any maintenance can be done in a rolling fashion.
    - Wilson
    http://www.michaelwilsondba.info
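    A rough sketch of option 2, one schema and tablespace per client (all names below are invented):

    CREATE TABLESPACE client_a_data DATAFILE 'client_a_data01.dbf' SIZE 500M;

    CREATE USER client_a IDENTIFIED BY "change_me"
        DEFAULT TABLESPACE client_a_data
        QUOTA UNLIMITED ON client_a_data;

    GRANT CREATE SESSION, CREATE TABLE, CREATE PROCEDURE TO client_a;
    -- The tables and the customization packages then live inside CLIENT_A,
    -- repeated for CLIENT_B, CLIENT_C, and so on.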

  • Which database design tool

    I know JDeveloper can also design database models (but I don't use it), so I want to know which database design tool to use, including for a logical data model?

    Hello
    You can use SQL Data Modeler:
    http://www.oracle.com/technology/products/database/datamodeler/index.html
    Regards, Erik

  • Database design for DSS

    Hello All
    I have to design a database for a DSS.
    It is basically an MIS project which will be used for reporting purposes.
    Can anyone please tell me what are the considerations to be made
    while designing for such a system.
    Any documents or web links will be of great help
    Thanks in advance
    Ashwin N.

    The best way to approach a database design is to write a
    specification for the application. Document what processes the help
    desk technicians will do. In the process, identify what pieces of
    information they work with. When you have the complete
    specification written, you can then begin grouping the pieces of
    information they work with together. For example, a ticket may have
    a number, status, priority and a person to whom it is assigned. The
    person to whom the ticket is assigned will have a name, phone
    number, e-mail address and a list of technical skills.
    So in this overly-simplified example, we could have a table
    that contains ticket information, a table that has information
    about technicians and a table of skills. Then ask yourself
    questions like "Can one ticket be handled by more than one
    technician?", "Can one technician handle more than one ticket?", "Can
    a technician have more than one skill?" In this way, you can begin
    seeing the one-to-one, one-to-many and many-to-many relationships
    that exist.
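    A minimal sketch of those tables, with a junction table for the many-to-many between technicians and skills (names are illustrative):

    CREATE TABLE technicians (
        technician_id NUMBER         PRIMARY KEY,
        name          VARCHAR2(100)  NOT NULL,
        phone         VARCHAR2(30),
        email         VARCHAR2(100)
    );

    CREATE TABLE skills (
        skill_id   NUMBER        PRIMARY KEY,
        skill_name VARCHAR2(100) NOT NULL
    );

    -- Many-to-many: a technician has many skills, a skill belongs to many technicians.
    CREATE TABLE technician_skills (
        technician_id NUMBER REFERENCES technicians (technician_id),
        skill_id      NUMBER REFERENCES skills (skill_id),
        PRIMARY KEY (technician_id, skill_id)
    );

    -- One-to-many: each ticket is assigned to (at most) one technician.
    CREATE TABLE tickets (
        ticket_id              NUMBER       PRIMARY KEY,
        status                 VARCHAR2(20) NOT NULL,
        priority               NUMBER(1)    NOT NULL,
        assigned_technician_id NUMBER REFERENCES technicians (technician_id)
    );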

  • Report in Designer fast, in Viewer extremely slow

    Hi.
    I have a report which connects to a SQL Server backend, calling 3 stored procs which deliver the data needed for the report. However, when I execute the report in the Designer (the web app uses CR 9, but I'm testing it with CR 2008 that came with VS 2008) it takes approx. 20 seconds to return with the data - yes, the query takes rather long...
    When I run our web application and call up the same report, using the same parameters and connected to the same database, the Viewer sits there for about 10 minutes before finally showing the report. I've been trying to determine the cause of this but have come up empty so far.
    The report itself is a fairly simple report: headers, a parameter overview (the report uses parameterized queries), the data, and no subtotals, no subreports, no formulas.
    Why is this taking so long using the Viewer? Apparently it can be fast(er), since the Designer returns within 20 secs WITH the correct data!
    I've tried a couple of things to see if I could determine the cause of the bad performance, but so far I've had no luck in improving performance whatsoever. The only thing left would be redesigning the underlying stored proc, but this is a rather complex stored proc and rewriting it would be no small task.
    Does anybody have any idea what to do next? Our customers are really annoyed by this (which I can understand) since they sometimes need to run this report a couple of times a day...

    Ludek Uher wrote:>
    >
    > Troubleshooting slow performance
    >
    > First thing to do with slow reports would be consulting the article "Optimizing Reports for the Web". The article can be downloaded from this location:
    >
    > https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/701c069c-271f-2b10-c780-dacbd90b2dd8
    >
    Interesting article. Unfortunately, trying several of the suggestions made, it didn't improve the report's performance. No noticeable difference in either Designer or Viewer.
    >
    > Next, determine where is the performance hit coming from? With Crystal Reports, there are at least four places in your code where slow downs may occur. These are:
    >
    > Report load
    > Connection to the data source
    > Setting of parameters
    > Actual report output, be it to a viewer, export or printer
    >
    This part is not relevant. Loading the report isn't the problem (first query being executed under 0.5 seconds after starting the report); as I'll explain further at the end of this reply.
    > A number of report design issues, report options and old runtimes may affect report performance. Possible report design issues include:
    >
    > • OLE object inserted into a report is not where the report expects it to be. If this is the case, the report will attempt to locate the object, consuming potentially large amounts of time.
    The only OLE object is a picture with the company logo. It is visible in design time though, so I guess that means it is saved with the report?
    > • The subreport option "Re-import when opening" is enabled (right click the subreport(s), choose format subreport, look at the subreport tab). This is a time consuming process and should be used judiciously.
    The report contains no subreports.
    > • Specific printer is set for the report and the printer does not exist. Try the "No printer" option (File | Page setup). Also, see the following resources regarding printers and Crystal reports;
    Tried that. It was set to the Microsoft XPS Document writer, but checking the 'No printer' option only made a slight difference (roughly 0.4 seconds in Designer).
    > • The number of subreports the report contains and in which section the subreports are located will impact report performance. Minimize the number of subreports used, or avoid using subreports if possible. Subreports are reports within a report, and if there is a subreport in a detail section, the subreport will run as many times as there are records, leading to long report processing times. Incorrect use of subreports is often the biggest factor why a report takes a long time to preview.
    As stated before, the report has no subreports.
    > • Use of "Page N of M", or "TotalPageCount". When the special field "Page N of M" or "TotalPageCount" is used on a report, it will have to generate each page of the report before it displays the first page. This will cause the report to take more time to display the first page of the report.
    The report DOES use the TotalPageCount and 'Page N of M' fields. But, since the report only consists of 3 pages, of which only 2 contain database-related data (read further below), I think this would not be a problem.
    > • Remove unused tables, unused formulas and unused running totals from the report. Even if these objects are not used in a report, the report engine will attempt to evaluate the objects, thus affecting performance.
    > • Suppress unnecessary report sections. Even if a report section is not used, the report engine will attempt to evaluate the section, thus affecting performance.
    > • If summaries are used in the report, use conditional formulas instead of running totals whenever possible.
    > • Whenever possible, limit records through the Record Selection Formula, not suppression.
    > • Use SQL expressions to convert fields to be used in record selection instead of using formula functions. For example, if you need to concatenate 2 fields together, instead of doing it in a formula, you can create a SQL Expression Field. It will concatenate the fields on the database server, instead of doing it in Crystal Reports. SQL Expression Fields are added to the SELECT clause of the SQL query sent to the database.
    > • Using one command table or Stored Procedure or a Table View as the datasource can be faster if you return only the desired data set.
    > • Perform grouping on the database server. This applies if you only need to return the summary to your report but not the details. It will be faster as less data will be returned to the report.
    > • Local client as well as server computer processor speed. Crystal Reports generates temp files in order to process the report. The temp files are used to further filter the data when necessary, as well as to group, sort, process formulas, and so on.
    All of the above points become moot if you know the structure of the report:
    3 pages, no subreports, 3 stored procs used, which each return a dataset.
    - Page 1 is just a summary of the parameters used for the report. This page also includes the TotalPageCount  field;
    - Page 2 uses 2 stored procs. The first one returns a dataset consisting of 1 row containing the headings for the columns of the data returned from stored proc 2. There will always be the same number of columns (only their heading will be different depending on the report), and the dataset is simply displayed as is.
    - The data from stored proc 2 is also displayed on Page 2. The stored proc returns a matrix, always the same number of columns, which is displayed as is. All calculations, groupings, etc. are done on the SQL Server;
    - Page 3 uses the third stored proc to display totals for the matrix from the previous page. This dataset too will always have the same number of columns, and all totaling is done on the database server. Just displaying the dataset as is.
    That's it. All heavy processing is done on the server.
    Because of the simplicity of the report I'm baffled as to why it would take so much more time when using the Viewer than from within the Designer.
    > Report options that may also affect report performance:
    >
    > • "Verify on First Refresh" option (File | Report Options). This option forces the report to verify that no structural changes were made to the database. There may be instances when this is necessary, but once again, the option should be used only if really needed. Often, disabling this option will improve report performance significantly.
    > • "Verify Stored Procedure on First Refresh" option (File | Report Options). Essentially the same function as above, however this option will only verify stored procedures.
    Hm. Both options WERE selected, and deselecting them caused the report to run approx. 10 seconds slower (from the Designer)...
    >
    >
    > If at all possible, use the latest runtime, be it with a custom application or the Crystal Reports Designer.
    >
    > • The latest updates for the current versions of Crystal Reports can be located on the SAP support download page:
    >
    > https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/bobj_download/main.htm
    >
    I've not done that (yet). Mainly because CR 10.5 came with VS2008, so it was easier to test to see if I can expect an improvement regarding my problem. Up till now, I see no improvement... ;-(
    > • Crystal Report version incompatibility with Microsoft Visual Studio .NET. For details of which version of Crystal Reports is supported in which version of VS .NET, see the following wiki:
    >
    > https://wiki.sdn.sap.com/wiki/display/BOBJ/CrystalReportsassemblyversionsandVisualStudio+.NET
    >
    >
    According to that list I'm using a correct version with VS2008. I might consider upgrading it to CR 12, but I'm not sure what I would gain with that. Because I can't exactly determine the cause of the performance problems I can't tell whether upgrading would resolve the issue.
    > Performance hit is on database connection / data retrieval
    >
    > Database fine tuning, which may include the installation of the latest Service Packs for your database must be considered. Other factors affecting data retrieval:
    >
    > • Network traffic
    > • The number of records returned. If a SQL query returns a large number of records, it will take longer to format and display than if it were returning a smaller data set. Ensure you only return the necessary data on the report, by creating a Record Selection Formula, or basing your report off a Stored Procedure, or a Command Object that only returns the desired data set.
    The amount of network traffic is extremely minimal. Two datasets (sp 1 and 3) return only 1 row containing 13 columns. The sp 2 returns the same number of columns, and (in this clients case) a dataset of only 22 rows, mainly numeric data!
    > • The amount of time the database server takes to process the SQL query. Crystal Reports sends the SQL query to the database, the database processes it, and returns the data set to Crystal Reports.
    Ah. Here we get interesting details. I have been monitoring the queries fired using SQL Profiler and found that:
    - ALL queries are executed twice!
    - The 'data' query (sp 2) which takes the largest amount of time is even executed 3 times.
    For example, this is what SQL profiler shows (not the actual trace, but edited for clarity):
    Query                  Start time         Duration (ms)
    sp 1 (headers)      11:39:31.283     13
    sp 2 (data)            11:39:31.330     23953
    sp 3 (totals)          11:39:55.313     1313
    sp 1 (headers)      11:39:56.720     16
    sp 2 (data)            11:39:56.890     24156
    sp 3 (totals)          11:40:21.063     1266
    sp 2 (data)            11:40:22.487     24013
    Note that in this case I didn't trace the queries for the Viewer, but I have done just that last week. For sp2 the values run up to 9462 seconds!!!
    > • Where is the Record Selection evaluated? Ensure your Record Selection Formula can be translated to SQL, so that the data can be filtered down on the server. If a selection formula cannot be translated into the correct SQL, the data filtering will be done on the local client computer, which in most cases will be much slower. One way to check if a formula function is being translated into SQL is to look at "Show SQL Query" in the CR designer (Database -> Show SQL Query). Many Crystal Reports formula functions cannot be translated into SQL because there may not be a standard SQL equivalent. For example, control structures like IF THEN ELSE cannot be translated into SQL. They will always be evaluated on the client computer. For more information on IF THEN ELSE statements see note number 1214385 in the notes database:
    >
    > https://www.sdn.sap.com/irj/sdn/businessobjects-notes
    >
    Not applicable in this case I'm afraid. All the report does is fetch the datasets from the various stored procs and display them; no additional processing is taking place. Also, no records are selected as this is done using the parameters which are passed on to the stored procs.
    > • Link tables on indexed fields whenever possible. While linking on non-indexed fields is possible, it is not recommended.
    Although the stored procs might not be optimal, that is beside the point here. The point is that performance of a report when run from the Designer is acceptable (roughly 30 seconds for this report) but when viewing the same report from the Viewer the performance drops dramatically, into the range of 'becoming unusable'.
    The report has its dataconnection set at runtime, but it is set to the same values it had at design-time (hence the same DB server). I'm running this report connected to a stand-alone SQL Server which is a copy of the production server of my client, I'm the only user of that server, meaning there are no external disturbing factors I have to deal with. And still I'm experiencing the same problems my client has.
    I really need this problem solved. So far, I've not found a single thing to blame for the bad performance, except maybe that queries are executed multiple times by the Crystal Reports engine. If it didn't do that, the time required to show the report would drop by approx. 60%.
    ...Charles...

  • Headstartv6 - 10g database

    I'm trying to get an existing application that uses Designer 6 upgraded to the 10g versions of Designer/database. Initially I'm not upgrading the Designer version - I just want to get the application files upgraded.
    I've hit the following roadblock - I cannot successfully compile the qmslib** against the database. I'm getting:
    database version 10.1.0.2
    forms version 9.0.4.0.19
    A library that can be successfully compiled against ver 8.1.7.3 and 9.2.0.3 cannot be compiled against the 10g database.
    Qms$formerrors has a reference to hil_message.message_rectype
    PDE-PDS001 Could not resolve reference to <Unknown Program Unit> while loading Procedure Body
    error 905 object package <name> is invalid - but the database package is valid!
    I suppose there's a mismatch between the tool set and the database.
    Is there a patch (database or forms) available?
    TIA
    Pete

    Hi,
    I have the same problem.
    I want to upgrade the application files (in Developer) to test them before I install the new environment (without needing to install the new Headstart and Designer versions).
    Maybe somebody will now answer... is there a workaround?
    Thanks in advance,
    Zlatko


    Problem: The APEX page-oriented help system is bad at helping users find how to do something. I prefer to use a task-oriented help system for that, with a table of contents that users can browse around in. I like the DITA (Darwin Information Typing A