Large number of sequences on Oracle 8i
One possible solution to an issue I am facing is to create a very large number (~20,000) of sequences in the database. I was wondering if anybody has any experience with this: is it a good idea, or should I find another solution?
Thanks.
Why not use one sequence (or certainly fewer than 20,000) and feed all your needs from it? Do your tables absolutely require sequential numbers, or just unique ones?
I had 6 applications a few years ago sharing the same database, about 80% of the tables in each application used sequences for primary key values and I fed each system off of one sequence.
All I was after was a unique id, so this worked fine. Besides, in the normal course of managing even an OLTP system you're bound to have records deleted, so there will be "holes" in the numbering anyway.
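As an illustration of feeding several tables from one sequence, here is a minimal sketch (table and column names are made up for the example; only uniqueness is guaranteed, not gap-free numbering):

```sql
-- One sequence shared by every table that only needs unique ids
CREATE SEQUENCE shared_pk_seq START WITH 1 INCREMENT BY 1 CACHE 100;

CREATE TABLE orders    (order_id    NUMBER PRIMARY KEY, order_date DATE);
CREATE TABLE customers (customer_id NUMBER PRIMARY KEY, cust_name  VARCHAR2(50));

-- Both tables draw from the same sequence; ids are unique across both,
-- and any gaps left by rollbacks or deletes are harmless
INSERT INTO orders    VALUES (shared_pk_seq.NEXTVAL, SYSDATE);
INSERT INTO customers VALUES (shared_pk_seq.NEXTVAL, 'ACME');
```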
Similar Messages
-
Large number of people access Oracle form6i
Is it OK for large number of people(over 1000 people simultaneously) to access the Oracle form(6i) on the web?
Forms scales - so long as you have enough resources it can cope.
Regards
Grant -
Creation of Sequences in Oracle
Hi
If I want to create a large number of sequences (around 30 to 35) in a single database, is there any performance degradation in the Oracle server, or any limitation?
Please explain if so! In our next project we are going to create around 30 to 35 sequences.
Thanks in advance
Raj
Sequences shouldn't degrade performance, theoretically, because they are not activated unless you reference them explicitly. They are not like daemon programs or background processes that run continuously, so it should be OK.
As long as you can manage that large number of sequences it should be fine.
prakash -
Oracle Error 01034 After attempting to delete a large number of rows
I sent the command to delete a large number of rows from a table in an oracle database (Oracle 10G / Solaris). The database files are located at /dbo partition. Before the command the disk space utilization was at 84% and now it is at 100%.
SQL Command I ran:
delete from oss_cell_main where time < '30 jul 2009'
If I try to connect to the database now I get the following error:
ORA-01034: ORACLE not available
df -h returns the following:
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
/dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
/dev/md/dsk/d8 42G 42G 0K 100% /dbo
I tried to get the space back by deleting all the data in the table oss_cell_main :
drop table oss_cell_main purge
But no change in df output.
I have tried solving it myself but could not find sufficiently directed information. Even pointing me to the right documentation will be highly appreciated. I have already looked at the following:
du -h:
8K ./lost+found
1008M ./system/69333
1008M ./system
10G ./rollback/69333
10G ./rollback
27G ./data/69333
27G ./data
1K ./inx/69333
2K ./inx
3.8G ./tmp/69333
3.8G ./tmp
150M ./redo/69333
150M ./redo
42G .
I think it's the rollback folder that has increased in size immensely.
SQL> show parameter undo
NAME TYPE VALUE
undo_management string AUTO
undo_retention integer 10800
undo_tablespace string UNDOTBS1
select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
TABLESPACE_NAME BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS
MAX_EXTENTS PCT_INCREASE MIN_EXTLEN STATUS CONTENTS LOGGING FOR EXTENT_MAN
ALLOCATIO PLU SEGMEN DEF_TAB_ RETENTION BIG
UNDOTBS1 8192 65536 1
2147483645 65536 ONLINE UNDO LOGGING NO LOCAL
SYSTEM NO MANUAL DISABLED NOGUARANTEE NO
Note: I can reconnect to the database for short periods of time by restarting the database. After some restarts it does connect, but only for a few minutes, not long enough to run exp.
Check the alert log for errors.
Select file_name, bytes from dba_data_files order by bytes;
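Expanding that check into a hedged sketch (the file name and target size below are illustrative; a datafile can only shrink down to its high-water mark):

```sql
-- See which files are largest
SELECT file_name, bytes/1024/1024 AS mb
FROM   dba_data_files
ORDER  BY bytes;

-- Attempt to release space back to the OS; this fails with ORA-03297
-- if used data sits above the requested size
ALTER DATABASE DATAFILE '/dbo/rollback/undotbs01.dbf' RESIZE 2G;
```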
Try to shrink some datafiles to get space back. -
Passing a large number of column values to an Oracle insert procedure
I am quite new to the ODP space, so can someone please tell me the best and most efficient way to pass a large number of column values to an Oracle procedure from my C# program? Passing a small number of values as parameters seems OK, but when there are many, this seems inelegant.
Passing a small number
of values as parameters seem OK but when there are
many, this seems inelegant.
Is it possible that your table with a staggering number of columns, or the method that collapses with so many inputs, is ultimately what is inelegant?
I once did a database conversion from VAX RMS system with a "table" with 11,000 columns to a normalized schema in an Oracle database. That was inelegant.
Michael O
http://blog.crisatunity.com -
How to calculate the area of a large number of polygons in a single query
Hi forum
Is it possible to calculate the area of a large number of polygons in a single query using a combination of SDO_AGGR_UNION and SDO_AREA? So far, I have tried doing something similar to this:
select sdo_geom.sdo_area((
select sdo_aggr_union ( sdoaggrtype(mg.geoloc, 0.005))
from mapv_gravsted_00182 mg
where mg.dblink = 521 or mg.dblink = 94 or mg.dblink = 38 <many many more....>),
0.0005) calc_area from dual
The table MAPV_GRAVSTED_00182 contains 2 fields - geoloc (SDO_GEOMETRY) and dblink (id field) needed for querying specific polygons.
As far as I can see, I need to first somehow get a single SDO_GEOMETRY object and use this as input for the SDO_AREA function. But I'm not 100% sure, that I'm doing this the right way. This query is very inefficient, and sometimes fails with strange errors like "No more data to read from socket" when executed from SQL Developer. I even tried with the latest JDBC driver from Oracle without much difference.
Would a better approach be to write some kind of stored procedure that adds up the results of calling SDO_AREA on each single geometry object - or what is the best approach?
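If the polygons do not overlap one another, summing the per-geometry areas gives the same total without the expensive aggregate union at all - a sketch of that alternative, using the table and tolerance from the post (valid only under the no-overlap assumption, since overlapping regions would be counted twice):

```sql
-- Total area as a plain aggregate over individual polygons,
-- skipping SDO_AGGR_UNION entirely
SELECT SUM(sdo_geom.sdo_area(mg.geoloc, 0.005)) AS calc_area
FROM   mapv_gravsted_00182 mg
WHERE  mg.dblink IN (521, 94, 38);
```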
Any advice would be appreciated.
Thanks in advance,
Jacob
Hi
I am now trying to update all my spatial tables with SRIDs. To do this, I try to drop the spatial index first and recreate it after the update. But for a lot of tables I can't drop the spatial index. Whenever I try DROP INDEX <spatial index name>, I get this error - anyone know what this means?
Thanks,
Jacob
Error starting at line 2 in command:
drop index BSSYS.STIER_00182_SX
Error report:
SQL Error: ORA-29856: error occurred in the execution of ODCIINDEXDROP routine
ORA-13249: Error in Spatial index: cannot drop sequence BSSYS.MDRS_1424B$
ORA-13249: Stmt-Execute Failure: DROP SEQUENCE BSSYS.MDRS_1424B$
ORA-29400: data cartridge error
ORA-02289: sequence does not exist
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 27
29856. 00000 - "error occurred in the execution of ODCIINDEXDROP routine"
*Cause: Failed to successfully execute the ODCIIndexDrop routine.
*Action: Check to see if the routine has been coded correctly.
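If the ODCIINDEXDROP failure is caused by a missing internal sequence, as the ORA-02289 above suggests, one workaround is to recreate the sequence the drop routine expects and then retry the drop - a sketch using the names from this particular error:

```sql
-- Recreate the sequence the index-drop routine complains about,
-- then retry dropping the spatial index
CREATE SEQUENCE bssys.mdrs_1424b$;
DROP INDEX bssys.stier_00182_sx;
```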
Edit - just found the answer for this in MetaLink note 241003.1. Apparently there is an internal problem when dropping spatial indexes: some objects get dropped that shouldn't be. The solution is to manually create the sequence it complains it can't drop; then the drop works... Weird error. -
How to show data from a table having large number of columns
Hi ,
I have a report with a single row having a large number of columns, and I have to use a scroll bar to see all the columns.
Is it possible to design the report in the below format (half the columns on one side of the page, half on the other side):
Column1  Data   Column11  Data
Column2  Data   Column12  Data
Column3  Data   Column13  Data
Column4  Data   Column14  Data
Column5  Data   Column15  Data
Column6  Data   Column16  Data
Column7  Data   Column17  Data
Column8  Data   Column18  Data
Column9  Data   Column19  Data
Column10 Data   Column20  Data
I am using Apex 4.2.3 version on oracle 11g xe.
user2602680 wrote:
Please update your forum profile with a real handle instead of "user2602680".
Yes, this can be achieved using a custom named column report template. -
Loading FF4 wiped out a large number of tabs, how do I restore?
I thought I was loading a security update. WHAT?! ALL MY TABS I WAS USING ARE WIPED OUT? Really? Or have you added a way to restore them?
Apparently the default setting on FF4 is to destroy the preserved tabs during the update.
My browser had a large number of tabs preserved. FF4 apparently wiped them out without asking. WTF people, why pull such unexpected stuff on your users. (Yes, I'm really pissed. It will take me hours to track down a couple of those pages again.) It wouldn't have been so bad if I knew I was getting FF4. Then I could bookmark the open pages. Again, I thought it was a security update.
The fix: have FF4 detect preserved tabs once (during the update), then display a warning message to allow users to bring the tabs into the new updated browser. -
After applying Patch 9440398 as per Oracle's Doc ID 1072226.1, I have successfully created a CODE 128 barcode.
But I am having an issue when creating a barcode whose value is a large number. Specifically, a number larger than around 16 or so digits.
Here's my situation...
In my RTF template I am encoding a barcode for the number 420917229102808239800004365998 as follows:
<?format-barcode:420917229102808239800004365998;'code128c'?>
I then run the report and a PDF is generated with the barcode. Everything looks great so far.
But when I scan the barcode, this is the value I am reading (tried it with several different scanner types):
420917229102808300000000000000
So:
Value I was expecting: 420917229102808239800004365998
Value I actually got: 420917229102808300000000000000
It seems as if the number is getting rounded at the 16th digit or so (it varies depending on the value I use).
I have tried several examples and all seem to do the same, but anything with 15 digits or less seems to work perfectly.
Any ideas?
Manny
Yes, I have.
But I have found the cause now.
When working with parameters coming in from the concurrent manager, all the parameters defined in the concurrent program in EBS need to be in the same case (upper, lower) as they have been defined in the data template.
Once I changed all to be the same case, it worked.
thanks for the effort.
regards
Ronny -
Problem fetch large number of records
Hi
I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and the query fetches 10,000 of them. I use the secondary database as an index and iterate over it until I have fetched all the information that matches my condition, but inside this loop performance is terrible.
I know that when I use DB_MULTIPLE it fetches all the information at once and performance improves, but
I read that I cannot use this flag when I use a secondary database for the index.
Please help me and tell me the flag or approach that fetches all of the information together, so I can manage this data in my language.
thanks a lot
regards
saeed
Hi Saeed,
Could you post here your source code, that is compiled and ready to be executed, so we can take a look at the loop section ?
You won't be able to do a bulk fetch, that is, retrieval with DB_MULTIPLE, given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve all the duplicate data (primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code, that is, having a database that acts as a secondary, in which you'll update the records manually, with regard to the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want to) as data. Note that for every modification that your perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
I have another question: is there any way to fetch information by record number?
For example, fetch the information located at the third record of my database.
I guess you're referring to logical record numbers, like a relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified), this is not possible directly. You could achieve it with a cursor, iterating through the records and stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured with logical record numbers (BTree with DB_RECNUM, Queue or Recno) this would have been possible directly:
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
Regards,
Andrei -
Approach to parse large number of XML files into the relational table.
We are exploring the option of XML DB for processing a large number of files that arrive on the same day.
The objective is to parse each XML file and store the data in multiple relational tables. Once it is in the relational tables, we do not care about the XML file.
The files cannot be stored on the file server and need to be stored in a table before parsing, due to security issues. A third-party system will send the files and store them in XML DB.
File size can be between 1MB and 50MB, and high performance is very much expected, otherwise the solution will be tossed.
Although we do not have an XSD, the XML files are well structured. We are on 11g Release 2.
Based on my reading, this is my approach:
1. CREATE TABLE XML_DATA
(xml_col XMLTYPE)
XMLTYPE COLUMN xml_col STORE AS SECUREFILE BINARY XML;
2. Third party will store the data in XML_DATA table.
3. Create XMLINDEX on the unique XML element
4. Create views on XMLTYPE
CREATE OR REPLACE FORCE VIEW V_XML_DATA(
Stype,
Mtype,
MNAME,
OIDT)
AS
SELECT x."Stype",
x."Mtype",
x."Mname",
x."OIDT"
FROM data_table t,
XMLTABLE (
'/SectionMain'
PASSING t.data
COLUMNS Stype VARCHAR2 (30) PATH 'Stype',
Mtype VARCHAR2 (3) PATH 'Mtype',
MNAME VARCHAR2 (30) PATH 'MNAME',
OIDT VARCHAR2 (30) PATH 'OID') x;
5. Bulk load the parse data in the staging table based on the index column.
Please comment on the above approach any suggestion that can improve the performance.
Thanks
AnuragT
Thanks for your response. It gives me more confidence.
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
TNS for Linux: Version 11.2.0.3.0 - Production
Example XML
<SectionMain>
<SectionState>Closed</SectionState>
<FunctionalState>CP FINISHED</FunctionalState>
<CreatedTime>2012-08</CreatedTime>
<Number>106</Number>
<SectionType>Reel</SectionType>
<MachineType>CP</MachineType>
<MachineName>CP_225</MachineName>
<OID>99dd48cf-fd1b-46cf-9983-0026c04963d2</OID>
</SectionMain>
<SectionEvent>
<SectionOID>99dd48cf-2</SectionOID>
<EventName>CP.CP_225.Shredder</EventName>
<OID>b3dd48cf-532d-4126-92d2</OID>
</SectionEvent>
<SectionAddData>
<SectionOID>99dd48cf2</SectionOID>
<AttributeName>ReelVersion</AttributeName>
<AttributeValue>4</AttributeValue>
<OID>b3dd48cf</OID>
</SectionAddData>
<SectionAddData>
<SectionOID>99dd48cf-fd1b-46cf-9983</SectionOID>
<AttributeName>ReelNr</AttributeName>
<AttributeValue>38</AttributeValue>
<OID>b3dd48cf</OID>
<BNCounter>
<SectionID>99dd48cf-fd1b-46cf-9983-0026c04963d2</SectionID>
<Run>CPFirstRun</Run>
<SortingClass>84</SortingClass>
<OutputStacker>D2</OutputStacker>
<BNCounter>54605</BNCounter>
</BNCounter>
I was not aware of virtual columns, but it looks like we can use one and avoid creating views, by inserting directly into
the staging table using the virtual column.
Suppose OID is the unique identifier of each XML file and I created a virtual column:
CREATE TABLE po_Virtual OF XMLTYPE
XMLTYPE STORE AS BINARY XML
VIRTUAL COLUMNS
(OID_1 AS (XMLCAST(XMLQUERY('/SectionMain/OID'
PASSING OBJECT_VALUE RETURNING CONTENT)
AS VARCHAR2(30))));
1. My question is: how do I then write this query NOT USING COLUMN XML_COL?
SELECT x."SECTIONTYPE",
x."MACHINETYPE",
x."MACHINENAME",
x."OIDT"
FROM po_Virtual t,
XMLTABLE (
'/SectionMain'
PASSING t.xml_col <--WHAT WILL PASSING HERE SINCE NO XML_COL
COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
MachineType VARCHAR2 (3) PATH 'MachineType',
MachineName VARCHAR2 (30) PATH 'MachineName',
OIDT VARCHAR2 (30) PATH 'OID') x;
2. Instead of creating the view, can I then do:
insert into STAGING_table_yyy (col1, col2, col3, col4)
SELECT x."SECTIONTYPE",
x."MACHINETYPE",
x."MACHINENAME",
x."OIDT"
FROM xml_data t,
XMLTABLE (
'/SectionMain'
PASSING t.xml_col <--WHAT WILL PASSING HERE SINCE NO XML_COL
COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
MachineType VARCHAR2 (3) PATH 'MachineType',
MachineName VARCHAR2 (30) PATH 'MachineName',
OIDT VARCHAR2 (30) PATH 'OID') x
where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
insert into STAGING_table_yyy (col1, col2, col3)
SELECT x."SectionOID",
x."EventName",
x."OIDT"
FROM xml_data t,
XMLTABLE (
'/SectionMain'
PASSING t.xml_col <--WHAT WILL PASSING HERE SINCE NO XML_COL
COLUMNS SectionOID VARCHAR2 (30) PATH 'SectionOID',
EventName VARCHAR2 (30) PATH 'EventName',
OID VARCHAR2 (30) PATH 'OID') x
where oid_1 = '99dd48cf-fd1b-46cf-9983';<--VIRTUAL COLUMN
Same insert for other tables usind the OID_1 virtual coulmn
3. Finally, once done, how can I delete the XML document from the table?
If I am using the virtual column then I believe it will be easy:
DELETE FROM po_Virtual WHERE oid_1 = '99dd48cf-fd1b-46cf-9983';
But in case we cannot use the virtual column, how can we delete the data?
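On question 1: an XMLType object table like PO_VIRTUAL has no named XML column; the OBJECT_VALUE pseudocolumn (already used in the virtual-column definition above) is what gets passed to XMLTABLE. A hedged sketch, reusing the staging-table and column names from the post:

```sql
-- For XMLType object tables, OBJECT_VALUE stands in for the missing column
INSERT INTO staging_table_yyy (col1, col2, col3, col4)
SELECT x.sectiontype, x.machinetype, x.machinename, x.oidt
FROM   po_virtual t,
       XMLTABLE ('/SectionMain'
                 PASSING OBJECT_VALUE   -- pseudocolumn, not a named column
                 COLUMNS sectiontype VARCHAR2(30) PATH 'SectionType',
                         machinetype VARCHAR2(3)  PATH 'MachineType',
                         machinename VARCHAR2(30) PATH 'MachineName',
                         oidt        VARCHAR2(30) PATH 'OID') x
WHERE  t.oid_1 = '99dd48cf-fd1b-46cf-9983';  -- virtual column filter
```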
Thanks in advance
AnuragT -
DBA Reports large number of inactive sessions with 11.1.1.1
All,
We have installed System 11.1.1.1 on some 32 bit windows test machines running Windows Server 2003. Everything seems to be working fine, but recently the DBA is reporting that there are a large number of inactive sessions throwing alarms that we are reaching our Max Allowed Process on the Oracle Database server. We are running Oracle 10.2.0.4 on AIX.
We also have some System 9.3.1 development servers that point at separate schemas in this environment, and we don't see the same high number of inactive connections there.
Most of the inactive connections are coming from Shared Services and Workspace. Anyone else see this or have any ideas?
Thanks for any responses.
Keith
Just a quick update. Originally I said this was only with 11.1.1.1, but we see the same high number of inactive sessions in 9.3. Anyone else seeing a large number of inactive sessions? They show up in Oracle as JDBC_Connect_Client. Do Shared Services, Planning, Workspace etc. utilize persistent connections, or do they just abandon sessions when the Windows service associated with an application is shut down? Any information or thoughts are appreciated.
Edited by: Keith A on Oct 6, 2009 9:06 AM
Hi,
Not the answer you are looking for, but have you logged it with Oracle? You might not get many answers to this question on here.
Not the answer you are looking for but have you logged it with Oracle as you might not get many answers to this question on here.
Cheers
John
http://john-goodwin.blogspot.com/ -
Large number of http posts navigating between forms
Hi,
i'm not really a forms person (well not since v3/4 running character mode on a mainframe!), so please be patient if I'm not providing the most useful information.
An oracle forms 10 system that I have fallen into supporting has to me very poor performance in doing simple things like navigating between forms/tabs.
Looking at the Java console (running Sun JRE 1.6.0_17) and turning on network tracing, I can see a much larger number of POST requests than I would expect (I looked here first as initially we had an issue with every request going via a proxy server, and I wondered if we had lost the bypass-proxy setting). Only a normal number of GETs though.
Moving to one particular detail form from a master record generates over 300 POST requests - I've confirmed this by looking at the Apache logs on the server. This is the worst one I have found, but in general the application appears to be extremely 'chatty'.
The only other system I work with which uses Forms doesn't generate anything like these numbers of requests, which makes me think this isn't normal (as well as the fact this particular form is very slow to open).
This is a third party application, so i don't have access to the source unfortunately.
Is there anything we should look at in our setup, or is this likely to be an application coding issue? This app is a recent conversion from a Forms 6 client/server application (which itself ran OK, at least this bit of the application did, with no delays in navigating between screens).
I'm happy to go back to the supplier, but it might help if I can point them into some specific directions, plus i'd like to know what's going on too!
Regards,
Carl
Sounds odd. 300 requests is by far too many. As it was a C/S application: did they do anything else except the recompile on 10g? Moving from C/S to 10g webforms seems easy, as you just need to recompile, but in fact it isn't. There are many things which didn't matter in a C/S environment but have disastrous effects once the form is deployed over the web. The SYNCHRONIZE built-in, for example: in C/S, calls to SYNCHRONIZE weren't that bad, but when you are using web-deployed forms, each call to SYNCHRONIZE is a roundtrip. The usage of timers is also best kept to a low level in webforms.
A good starting point for the whole do's and dont's when moving forms to the web is the forms upgrade center:
http://www.oracle.com/technetwork/developer-tools/forms/index-095046.html
If you don't have the source code available, that's unfortunate; but if you want to know what's happening behind the scenes, there is the possibility to trace a forms session:
http://download.oracle.com/docs/cd/B14099_19/web.1012/b14032/tracing002.htm#i1035515
maybe this sheds some light upon what's going on.
cheers -
Large number of FNDSM and FNDLIBR processes
hi,
description of my system
Oracle EBS 11.5.10 + oracle 9.2.0.5 +HP UX 11.11
problem: there are a large number of FNDSM, FNDLIBR and sh processes during peak load, around 300, but even at no load these processes don't come down, though Oracle processes
come down from 250 to 80; these apps processes just don't get killed automatically.
Can I kill these processes manually?
One more thing: even after stopping applications with adstpall.sh, these processes don't get killed. Is that normal? For now I just dismount the database so as to kill these processes.
And under what circumstances should I run cmclean?
Hi,
problem: there are a large number of FNDSM, FNDLIBR and sh processes during peak load, around 300, but even at no load these processes don't come down, though Oracle processes do
This means there are lots of zombie processes running, and all of them need to be killed.
Shut down your application and database and bounce the server, as there are too many zombie processes. I once faced an issue in which, due to these zombie processes, CPU utilization stayed at 100% continuously.
Once you restart the server, start the database and listener, run cmclean, and start the application services.
one more thing, even after stopping applications with adstpall.sh, these processes don't get killed, is it normal??
No, it's not normal and should not be neglected. I should also advise you to run the [Oracle Application Object Library Concurrent Manager Setup Test|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=200360.1]
and under what circumstances should I run cmclean?
[CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=134007.1]
You can run cmclean if you find that, after starting the applications, the managers are not coming up or the actual processes are not equal to the target processes.
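That actual-versus-target comparison can be checked from the standard concurrent manager tables in EBS - a hedged sketch:

```sql
-- List managers whose running processes don't match the configured target
SELECT concurrent_queue_name,
       running_processes,
       max_processes AS target_processes
FROM   fnd_concurrent_queues
WHERE  running_processes <> max_processes;
```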
Thanks,
Anchorage :) -
JDev: af:table with a large number of rows
Hi
We are developing with JDeveloper 11.1.2.1. We have a VO that returns over 2,000,000 rows, which we display in an af:table with access mode 'Scrollable' (the default) and 'in Batches of' 101. The user can select one row and do CRUD operations on the VO with popups. The application works fine, but I read that scrolling through a very large number of rows is not a good idea because it can cause an OutOfMemory exception if the user uses the scroll bar many times. I have tried access mode 'Range Paging', but then the application behaves strangely. Sometimes when I select a row to edit, if the selected row is number 430, the popup shows number 512; and when I try to insert a new row it throws this exception:
oracle.jbo.InvalidOperException: JBO-25053: No se puede navegar con filas no enviadas en RangePaging RowSet.
at oracle.jbo.server.QueryCollection.get(QueryCollection.java:2132)
at oracle.jbo.server.QueryCollection.fetchRangeAt(QueryCollection.java:5430)
at oracle.jbo.server.ViewRowSetIteratorImpl.scrollRange(ViewRowSetIteratorImpl.java:1329)
at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStartWithRefresh(ViewRowSetIteratorImpl.java:2730)
at oracle.jbo.server.ViewRowSetIteratorImpl.setRangeStart(ViewRowSetIteratorImpl.java:2715)
at oracle.jbo.server.ViewRowSetImpl.setRangeStart(ViewRowSetImpl.java:3015)
at oracle.jbo.server.ViewObjectImpl.setRangeStart(ViewObjectImpl.java:10678)
at oracle.adf.model.binding.DCIteratorBinding.setRangeStart(DCIteratorBinding.java:3552)
at oracle.adfinternal.view.faces.model.binding.RowDataManager._bringInToRange(RowDataManager.java:101)
at oracle.adfinternal.view.faces.model.binding.RowDataManager.setRowIndex(RowDataManager.java:55)
at oracle.adfinternal.view.faces.model.binding.FacesCtrlHierBinding$FacesModel.setRowIndex(FacesCtrlHierBinding.java:800)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
<LoopDiagnostic> <dump> [8261] variableIterator variables passivated >>> TrackQueryPerformed def
<LifecycleImpl> <_handleException> ADF_FACES-60098:El ciclo de vida de Faces recibe excepciones no tratadas en la fase RENDER_RESPONSE 6
What is the best way to display this amount of data in a af:table and do CRUD operations?
Thanks
Edited by: 972255 on 05/12/2012 09:51
Hi,
honestly, the best way is to provide users with an option to filter the result set displayed in the table, to reduce the result set size. No-one will query 2,000,000 rows using the table scrollbar.
So one hint for optimization would be a query form (e.g. af:query)
To answer your question, "scrollable" vs. "range paging", see
http://docs.oracle.com/cd/E21043_01/web.1111/b31974/bcadvvo.htm#ADFFD1179
Pay attention to what is written in the context of "The range paging access mode is typically used for paging through read-only row sets, and often is used with read-only view objects."
Frank