Large Survey & Table

Hi,
I am trying to create a survey/form based on a table that contains 100 columns.
I would like to achieve the following:
Page 1 = displays the first 10 columns
Page 2 = displays the next 15 columns
etc.
Once the user has gone through all 100 columns, the insert should occur.
I tried to use the WIZARD template but I didn't get too far.
I think I remember seeing something on OTN about this, but I don't remember where.
Can anybody help me with this?
Thanks in advance.
VC

Hi Vikas,
When I try to create the sample application using the link, the wizard allows for 9 pages with 9 items on each page. I suppose I can add the other items manually, but is there any limit to the number of items you can put on one page?
Thanks
VC

Similar Messages

  • Creating index on large partitioned table

    Is anyone aware of a method for telling how far along the creation of an index on a large partitioned table is? The statement I am executing is like this:
    CREATE INDEX "owner"."new_index"
    ON "owner"."mytable"(col_1, col_2, col_3, col_4)
    PARALLEL 8 NOLOGGING ONLINE LOCAL;
    This is a two-node RAC system on Windows 2003 x64, using ASM. There are more than 500,000,000 rows in the table, and I'd estimate that each row is about 600-1000 bytes in size.
    Thank you.

    You can check the progress in v$session_longops:
    select substr(sid || ',' || serial#, 1, 8)                 "sid,srl#",
           substr(opname || '>' || target, 1, 50)              op_target,
           substr(trunc(sofar / totalwork * 100) || '%', 1, 5) progress,
           time_remaining                                      rem,
           elapsed_seconds                                      elapsed
    from   v$session_longops
    where  sofar != totalwork
    order  by sid;
    hth
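    Since the build runs with PARALLEL 8, it can also be worth confirming that the requested parallel slaves were actually granted. A quick check while the CREATE INDEX session is running (a sketch, not specific to your system):
    select qcsid, sid, degree, req_degree
    from   v$px_session
    order  by qcsid, sid;
    If DEGREE comes back lower than REQ_DEGREE, the statement was downgraded and will take correspondingly longer.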

  • Select count from large fact tables with bitmap indexes on them

    Hi..
    I have several large fact tables with bitmap indexes on them. When I do a select count(*) from these tables, I get a different result than when I do a select count(*), column_one from the table group by column_one. There are no NULL values in these columns. Is there a patch or one-off fix that can rectify this?
    Thx

    You may have corruption in the index if these two queries give different results:
    Select /*+ full(t) */ count(*) from my_table t;
    Select /*+ index_combine(t my_index) */ count(*) from my_table t;
    Look on Metalink for patches, and in the meantime drop and recreate the indexes, or make them unusable and then rebuild them.
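    A minimal sketch of the unusable/rebuild route (index and partition names here are hypothetical):
    ALTER INDEX my_bitmap_idx UNUSABLE;
    ALTER INDEX my_bitmap_idx REBUILD;
    For a local (partitioned) bitmap index you can instead work one partition at a time, so the untouched partitions stay usable:
    ALTER INDEX my_bitmap_idx MODIFY PARTITION p_200701 UNUSABLE;
    ALTER INDEX my_bitmap_idx REBUILD PARTITION p_200701;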

  • Splitting Large internal tables

    Hi All,
    How do I split a large internal table into smaller ones with a fixed number of lines each?
    The total number of lines is not known in advance and can vary.
    Regards,
    Naba

    I am not sure about your exact requirement, but you could try something like the solution below.
    ITAB contains all entries, let us say 3000. It is split into ITAB1, ITAB2 and ITAB3, each holding 3000 / 3 = 1000 lines.
    split_val = 1000.
    n_split_from = 1.
    n_split_to = split_val.
    APPEND LINES OF itab FROM n_split_from TO n_split_to TO itab1.
    n_split_from = n_split_from + split_val.
    n_split_to = n_split_to + split_val.
    APPEND LINES OF itab FROM n_split_from TO n_split_to TO itab2.
    n_split_from = n_split_from + split_val.
    n_split_to = n_split_to + split_val.
    APPEND LINES OF itab FROM n_split_from TO n_split_to TO itab3.
    Regards
    Sasi

  • How to manage large partitioned table

    Dear all,
    We have a large partitioned table with 126 columns and 380 GB of data, and it is not indexed. Can anyone tell me how to manage it? The queries are now taking more than 5 days.
    Looking forward to your reply.
    Thank you.

    Hi,
    You can store individual partitions in separate tablespaces. Doing this lets you:
    Reduce the possibility of data corruption affecting multiple partitions
    Back up and recover each partition independently
    Control the mapping of partitions to disk drives (important for balancing I/O load)
    Improve manageability, availability, and performance
    Remember, as the documentation states:
    The maximum number of partitions or subpartitions that a table may have is 1024K-1.
    Lastly, you can use SQL*Loader and the export and import utilities to load or unload data stored in partitioned tables. These utilities are all partition and subpartition aware.
    Document Reference:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/partiti.htm
    Adith
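    Given that queries are running for days against an unindexed 380 GB table, the quickest practical win is usually a local (partition-aware) index on the columns the queries filter on. A rough sketch, with purely hypothetical table and column names:
    CREATE INDEX big_tab_ix1 ON big_tab (filter_col1, filter_col2)
      LOCAL PARALLEL 8 NOLOGGING;
    A local index is equipartitioned with the table, so partition maintenance (dropping or exchanging a partition) does not leave the whole index unusable, and queries that filter on the partition key can still prune partitions.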

  • Dbms_stats fails on large partitioned tables

    Hi,
    I need to run dbms_stats on a partitioned table with partitions as large as 300 GB, and it fails. Please help: how should I work out a strategy to collect stats?
    Thanks,
    Kp

    "it fails" and "there no errors and just doesnt complete and have to stop" are not the same thing.
    Does it fail ?
    OR
    Does it take long to run (and you haven't waited for it to complete) ?
    With very large partitioned tables you need to be sure of the GRANULARITY, DEGREE, METHOD_OPT and CASCADE options of the Gather_Stats call that you need to / want to run.
    We have no idea of the options you use, the type and number of partitions, whether all the partitions have changing data, how many indexes exist, whether the indexes are Global or Local, how many PQ operators are available and in use, etc.
    So, we cannot answer your question.
    Hemant K Chitale
    http://hemantoracledba.blogspot.com
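    As a rough illustration of the kind of call Hemant is describing, gathering one partition at a time (the schema, table, partition and degree values here are placeholders, not recommendations):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'MYSCHEMA',
        tabname          => 'BIG_PART_TAB',
        partname         => 'P_200701',
        granularity      => 'PARTITION',                    -- partition-level stats only on this pass
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        degree           => 8,
        cascade          => TRUE);                          -- gather the partition's index stats too
    END;
    /
    Running it partition by partition (with a separate pass for global stats if you need them) makes a 300 GB partition far more tractable than one monolithic gather.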

  • Logical sql firing larger aggregate table instead of smaller one

    Hi
    When we process a request containing one particular column along with a time dimension, say month or year, the logical SQL hits the larger aggregate table instead of the smaller aggregate table. Please help us resolve this issue.
    The OracleBI version we are using is 10.1.3.4.1
    Thanks.

    Hi,
    Try posting in the OLAP forum.
    Thanks, Mark

  • Optimisation of NN lookup for large roads table

    We have a large roads table (approx 2.5M records) for the UK, and we have vehicles for which we need to locate the nearest road, for approx 1-5k vehicle locations at a time. The roads table has an R-tree index on it.
    The roads data is in British National Grid; the vehicle locations are in WGS-84.
    Using a standard SDO_NN query, this takes approx 250ms per lookup on a twin 1GHz PIII with 1.5GB RAM and a 10k SCSI RAID 5 disk array - clearly not a very good result. Does anybody have any ideas as to how to optimise this search further? I would try using a filter, but when I include it in the single query it degrades performance. The ideal situation would be a filter that is applied once for the whole 1-5k vehicle batch, but I'm not sure how to do this in Java/VB.

    Hi,
    What David says is correct. You should always use the hint, otherwise the spatial index may not be used and that may return an error. However, in your case it does not return an error, so I would guess the spatial index is being used.
    With regard to performance:
    1. If there are multiple neighbors for each road, there are multiple candidates to be evaluated and the query will be slow. Spatial queries are currently evaluated using the minimum resources for memory, etc. so as to work in all situations, and they do not do anything fancy with 1 GB of memory or twin processors. Parallel query is an item for future releases. One thing that could be done in your situation is to cache some of the table data (use the buffer_pool parameters for the table) and pose queries within the same county or region one after another, so that queries which are likely to share neighboring data can take advantage of the buffering.
    2. In addition to the above, in 9.2 nearest-neighbor performance should have improved substantially. This has to do with some internal optimizations minimizing the number of table fetches. In concept, this behaves close to a filter, which does not fetch any geometries from the table (it only uses the index) and fetches only as many candidate geometries as needed, assuming limited memory. Without doing any of the tuning described in 1, you should see improvements.
    3. An alternative is to use a filter and sdo_distance. Use sdo_filter by creating a buffer around the query geometry and identify all data within the buffered query. Evaluate sdo_geom.sdo_distance between the query geometry and each result, order them by distance and select the k nearest. The efficiency of this method depends on the selectivity of the filter operation (with the buffered query). If the filter returns only 2 or 3 times the number of neighbors needed, this method will perform well.
    Hope that helps,
    - Ravi.
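    A minimal sketch of the filter-plus-distance approach in point 3 (the table, column, bind variable, buffer radius and tolerance are all assumptions, and the vehicle point is presumed to have been transformed to British National Grid first, e.g. with SDO_CS.TRANSFORM):
    SELECT road_id, dist
    FROM  (SELECT r.road_id,
                  SDO_GEOM.SDO_DISTANCE(r.geom, :vehicle_pt, 0.05) AS dist
           FROM   roads r
           WHERE  SDO_FILTER(r.geom,
                             SDO_GEOM.SDO_BUFFER(:vehicle_pt, 200, 0.05),
                             'querytype=WINDOW') = 'TRUE'
           ORDER BY dist)
    WHERE ROWNUM <= 1;   -- nearest road only; use <= k for the k nearest
    How well this performs depends on picking a buffer radius that returns only a small multiple of the neighbours you actually need, as Ravi notes above.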

  • How to Link VALUEGUID to SurveyGUID (CRM Survey tables)

    Hi Gurus,
    I want to link table CRMD_ORDERADM_H to table CRM_SVY_DB_SVS
    I can go to CRMD_ORDERADM_H to get the survey GUID. Someone told me that I can go to the CRMD_LINK table to get the value GUID, but when I enter the survey GUID the table only returns three records (objtype_set = 07, 30 & 29; no 58), so I am lost. I don't know how to link table CRMD_ORDERADM_H with table CRM_SVY_DB_SVS.
    Your help will be highly appreciated.
    Thanks.
    Nadeem.

    Hi Nadeem,
    What is the end result that you expect? Do you just need the survey details, or do you want the survey details linked to a particular order?
    Call the FM as follows (declare the GUID and survey tables to suit your program):
    CALL FUNCTION 'CRM_ORDER_READ'
      EXPORTING
        it_header_guid = lt_header_guid   " table containing your order GUID
      IMPORTING
        et_survey      = lt_survey.       " survey table
    The survey table will give you the list of surveys (with their survey GUIDs). I would recommend using this FM, as it takes care of buffers etc., which would not be the case if you try a direct select on the tables.
    Reward points if this helps.
    Regards,
    Raviraj

  • Large editable tables

    I have a datatable with input fields assigned to each column. The user can edit values freely and then submit the changes for update. Because of the size of the list it appears that a huge amount of processing is required even though only a couple of rows are actually modified by the user.
    Is anyone aware of any best practices for handling large editable tables?
    Thanks


  • Large volume tables in SAP

    Hello All,
    Does anyone have a list of all the large-volume tables in SAP (tables which might create a problem in SELECT queries)?

    Hi Nirav,
    There is no specific list as such. But irrespective of the amount of data in a table, if you supply the full primary key in the SELECT query there will be no issue with the SELECT.
    If you still want to see the largest tables, check transaction DB02.
    Regards,
    Atish

  • Survey tables

    Hello,
    I need to know in which tables the questions and answers assigned to a survey are stored.
    Thanks!
    Susana Messias

    Hi Susana,
    The table CRM_SVY_RE_ANSW contains the questions and possible answers on the survey form. But if you are looking for the tables where the actual responses are saved, you might want to look at CRM_SVY_DB_SV and CRM_SVY_DB_SVS. I had the same question, and Piyush and Florin were able to give me this information: I was told that the actual response is stored as XML. Thanks to both. You may want to find that thread for more info - search for 'CRM Survey Tables'.
    Regards,
    Noel De Olazo

  • CRM Survey Tables

    Hi all,
    I have defined a survey with a list of values. The user can select one of these values.
    In the next step the user wants to print a document which contains, among other things, the selected values from the survey (not the survey itself!).
    I created a new report and found the questions and answers of the survey (part of the response of the function call 'CRM_SURVEY_READ_OW', importing structure CRMT_SURVEY_WRK, field VALUEXML).
    I also found the list of values, but I can't find the selected values.
    Who can help?
    Regards,
    Ralf

    Hello Ralf,
    Hopefully I understood your question correctly:
    The user selects a value from a value list (for example, a drop-down) and you would like to retrieve the value the user selected.
    I would suggest searching in the PAI of the survey.
    Open the Survey Suite, select your questionnaire and choose Maintain Survey Attributes (Ctrl+F12). There you see the PAI of the survey, where you can debug to find the tables in which the values are stored.
    Hope this helps.
    Regards,
    Martin Kuma

  • Large partitioned tables with WM

    Hello
    I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to stop the process that adds the new rows from being an overnight batch to being a near real time process i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    Hello Ben
    Thank you for your reply.
    The table structure we have is like so:
    CREATE TABLE hdr
    (   pk_id               NUMBER PRIMARY KEY,
        customer_id         NUMBER REFERENCES customer,
        entry_type          NUMBER NOT NULL
    );
    CREATE TABLE dtl_daily
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_200709
            VALUES LESS THAN (TO_DATE('200710','YYYYMM'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_200710
            VALUES LESS THAN (TO_DATE('200711','YYYYMM'))
            TABLESPACE x COMPRESS
    );
    CREATE TABLE dtl_hourly
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        active_hour         NUMBER NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_20070901
            VALUES LESS THAN (TO_DATE('20070902','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070902
            VALUES LESS THAN (TO_DATE('20070903','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070903
            VALUES LESS THAN (TO_DATE('20070904','YYYYMMDD'))
            TABLESPACE x COMPRESS
        ...For every day for 20 years
    );
    The hdr table holds one or more rows for each customer and has its own synthetic key generated for every entry, as there can be multiple rows having the same entry_type for a customer. There are two detail tables, daily and hourly, which hold detail data at those two granularities. Some customers require hourly detail, in which case the hourly table is populated and the daily table is populated by aggregating the hourly data. Other customers require only daily data, in which case the hourly table is not populated.
    At the moment, changes to customer data require that the content of these tables be rebuilt for that customer. This rebuild is done every night for the changed customers, and I want to change it to a near-real-time rebuild. The rebuild involves deleting all existing entries from the three tables for the customer and then re-inserting the new set using new synthetic keys. If we do make this near real time, we need to be able to provide a snapshot of the data as of close of business every day, and we need to be able to report as of a point in time up to 10 days in the past.
    For any one customer, they may have rows in the hourly table that go out 20 years at an hourly granularity, but once the active date has passed (by 10 days), we no longer need to keep it. This is why we were considering partitioning: it gives us a simple way of dropping off old data and, as a nice side effect, helps to improve the performance of queries that look for active data between a range of dates (which is most of them).
    I did have a look at the idea of savepoints but I wasn't sure it would be efficient. So in this case, would the idea be that we don't partition the table, but instead, at close of business every day, we create a savepoint like "savepoint_20070921" and use dbms_wm.gotosavepoint instead of dbms_wm.gotodate? Then every day we would do:
    DBMS_WM.DeleteSavepoint(
       workspace                   => 'LIVE',
       savepoint_name              => 'savepoint_20070910', --10 days ago
       compress_view_wo_overwrite  => TRUE);
    DBMS_WM.CompressWorkspace(
       workspace                   => 'LIVE',
       compress_view_wo_overwrite  => TRUE,
       firstSP                     => 'savepoint_20070911'); --the new oldest savepoint
    Is my understanding correct?
    David
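    For reference, the daily cycle described above would look roughly like this (a sketch only, reusing David's workspace and savepoint names; the tables are assumed to have been version-enabled with DBMS_WM.EnableVersioning):
    -- at close of business each day, mark the point in time
    EXEC DBMS_WM.CreateSavepoint('LIVE', 'savepoint_20070921');
    -- to report as of close of business on a given day
    EXEC DBMS_WM.GotoSavepoint('savepoint_20070921');
    -- ... run the as-of queries against the version-enabled views ...
    EXEC DBMS_WM.GotoSavepoint('LATEST');   -- return to current data
    The DeleteSavepoint / CompressWorkspace pair in the post above would then be the daily clean-up that drops history older than 10 days.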

  • Large Fact Tables

    I have some transactional tables that are quite large - anywhere from 1 to 3 million records. I have been doing my best to find information on the best way to deal with these, and I am having some trouble. I am a little tight for time and could use some help from others who have dealt with large data sets before. What are some of the best ways to deal with this?
    I have tried creating database-level partitions by date, which didn't seem to affect OBI performance much. If I create different physical tables through the ETL process for each year it helps, but I would like to avoid creating and managing that many different mappings.
    Is the best way to do this to use OBI caching and seed the cache after loading the data? I have also tried a few indexes and not had much luck.
    Any help is greatly appreciated and I thank everyone in advance for their time. :)

    Presuming an Oracle backend database (you didn't say).
    Partitioning would seem a quick (and expensive $$$$) win. If you partition on date, be sure to use the same date key to get 'partition pruning', which is to say give the optimizer a chance to throw away all unrequired partitions before it starts going to disk. When you say it "didn't improve performance much": you need a partition-wise join for the partitioning to become effective at query time - did you do this? For example, a dashboard prompt with a filter on week / month would help (assuming your week / month time dimension column is joined on the partition key).
    Is your data model a star schema? Do you have dimension metadata in your database? I'd be looking at materialized views and some sort of aggregate rollup query rewrite, whereby your queries are rewritten at run time to hit the MView instead of your fact table.
    As for caching, OBIEE caching is the last resort after you have exhausted every other alternative; it is not there to speed up queries, and you would only be hiding the problem - what happens when a user drills down or needs to run an ad-hoc report that is not cached?
    I would start by understanding Oracle execution plans, reviewing your data model (with a view to speedy data extraction) and looking at what the Oracle DB gives you for data warehousing:
    parallelism is your friend!
    star transformation
    MViews (table or cube based)
    OLAP
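    To make the materialized-view suggestion concrete, here is a minimal sketch of an aggregate MView that the optimizer can rewrite month-level queries to (the fact, dimension and column names are hypothetical, and query rewrite also requires QUERY_REWRITE_ENABLED plus the appropriate privileges):
    CREATE MATERIALIZED VIEW sales_month_mv
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    ENABLE QUERY REWRITE
    AS
    SELECT t.month_key,
           p.product_key,
           SUM(f.sales_amount) AS sales_amount,
           COUNT(*)            AS row_cnt
    FROM   sales_fact  f
    JOIN   time_dim    t ON t.time_key    = f.time_key
    JOIN   product_dim p ON p.product_key = f.product_key
    GROUP BY t.month_key, p.product_key;
    With this in place, a dashboard query that groups the fact table by month and product can be answered from the much smaller MView without touching the 1-3 million row fact table.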
