Small dump file size causes extremely large tablespace size
Hi,
I have a big problem. I have an Oracle 9i dump with a size of 1.3 MB. If I import this file (on the same server) into another tablespace / table owner, the tablespace grows to 5.5 GB. In Oracle 8i a similar dump uses only 37 MB.
Has somebody had similar experiences and solved this problem?
Please help me.
Thanks in advance.
Ulf
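One common cause of this behavior (an assumption here, since the thread doesn't confirm it) is exporting with the exp parameter COMPRESS=Y, which rewrites each table's storage clause so that the whole allocated size is requested as one huge INITIAL extent at import time. A hedged diagnostic sketch; the schema name NEWOWNER is a placeholder:

```sql
-- Compare the INITIAL extent Oracle allocated for each imported segment
-- with the space it actually occupies; huge INITIAL extents holding
-- little data point at COMPRESS=Y during export.
SELECT segment_name,
       initial_extent / 1024 / 1024 AS initial_mb,
       bytes / 1024 / 1024          AS allocated_mb
FROM   dba_segments
WHERE  owner = 'NEWOWNER'
ORDER  BY initial_extent DESC;
```

If the INITIAL extents are inflated, re-running the export with COMPRESS=N (or pre-creating the tables with sensible storage clauses and importing with IGNORE=Y) should keep the target tablespace close to the 8i size.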
Similar Messages
-
Working with large tables - thumbnail size
Hi,
I'm working with some oversized tables in IBA. What I usually do is make the table as needed, then use the "Uses thumbnail on page" option in the Layout section of the Inspector and adjust the thumbnail size to fit the page. What happened this time is that after the document was closed and reopened, some tables reset the thumbnail size back to the default, which is small. I can't seem to find what's causing this; one table is not behaving like that, although it was made using the same method. Has anyone else run into the same thing? Any suggestions?
Thanks in advance.
Why don't you use a stored proc?
Why are you ordering it?
Should I take partial entries in a loop? Yep. Because software isn't perfect. There's no point in attempting to process the universe when you know it will fail sometime, and it is easier to handle smaller failures than large ones (and you won't have to redo everything). -
Images should grow or shrink when the user chooses a larger or smaller font size in ADE.
Hi All,
I use percentage values (in the "width" attribute of the "img" element) for images that should grow or shrink when the user chooses a larger or smaller font size. The adjustment affects not only the size of the text but also the images, but the images get distorted or blurred when the user chooses a larger font size in Adobe Digital Editions.
Can anyone please guide me on how to get non-distorted, non-blurred images in an epub package?
If I use the images directly, without the svg wrapper (i.e. png or jpg files), then the images overlap the body text in two-column display in ADE.
OR
if I use jpg or png directly with a width attribute in percent, the image shrinks and grows as the user chooses smaller and larger font sizes, but the problem of distorted images remains as it is:
<div class="media-group"><img src="images/pa0000016g26001.png or jpg" alt="Image" width="60 or 70%" /></div>
Attached is the package (Book.epub) which has test cases.
========================================================
Please find below the sample coding used in the attached epub package:
In XHTML:
<div class="media-group"><img src="images/pa0000016g26001.svg" alt="Image" width="100%" /></div>
In SVG:
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 12.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 51448) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" [
<!ENTITY ns_svg "http://www.w3.org/2000/svg">
<!ENTITY ns_xlink "http://www.w3.org/1999/xlink">
]>
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="386px" height="554px" viewBox="0 0 386 554" enable-background="new 0 0 386 554" xml:space="preserve">
<image overflow="visible" width="772" height="1108" xlink:href="pa0000016g26001.png" transform="matrix(0.50 0 0 0.50 0 0)">
</image>
</svg>
========================================================
Cheers
Vikas
Hi DenisonDoc,
There is no option right now to set properties globally, primarily for text fields. You may make sure the fields don't contain anything.
Select all the text fields in the form, right-click any of the selected fields (make sure all of them are selected), and choose Properties --> Appearance; there you can choose the font size and font type.
- End users can't change the size and type of font. That is up to the designer.
Regards,
Ajlan Huda. -
ALV: how to save context space for large tables?
Dear colleagues,
We are displaying an ALV table that is quite large. Therefore, the corresponding DDIC structure and the WD context are large. This has an impact on performance and the load size of the program. Now we will enhance the ALV table again.
Example: for an icon and its explanatory tooltip displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for the tooltip as well as for the icon (they need a lot of characters for each tooltip and icon).
Question: do you have an idea how to save context space for those ALV fields?
Best regards,
Christian
>We are displaying an ALV table that is quite large.
Do you mean quite large as in a large number of columns or as in a large number of rows (or both)? I assume that the problem is probably more related to a large number of rows. For very large tables, you should consider using the table instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time. Here is a recent blog that I created on the topic with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows? -
hi all,
how can we set the tablespaces and extent sizes?
Hi
All of these things you can do when you are creating a table.
You create a table using SE11.
After that you assign the fields to the table, and later you give the technical settings for the table.
Here you specify the table size, which is called Extents.
Tablespaces are also assigned to a table there.
Reward if useful -
SELECTing from a large table vs small table
I posted a question a few months back about the comparison between INSERTing into a large table vs a small table (fewer rows), in terms of time taken.
The general consensus seemed to be that it would be the same, except for the time taken to update the index (which will be negligible).
1. But now, following the same logic, I'm confused why SELECTing from a large table should be more time-consuming ("expensive") than SELECTing from a small table
(SELECTing using an index).
My understanding of how Oracle works internally is this:
It will first locate the ROWID from the B-tree that stores the index.
(This operation is O(log N), based on the B-tree.)
The ROWID essentially contains the file pointer offset of the location of the data on disk.
And Oracle simply reads the data from the location it deduced from the ROWID.
But then the only variable I see is searching the B-tree, which should take O(log N) time for comparison (N = number of rows).
Am I correct above?
2. Also I read that tables are partitioned for performance reasons. I read about various partitioning mechanisms, but cannot figure out how they can result in performance improvements.
Can somebody please help?
It's not going to be that simple. Before your first step (locating the ROWID from the index), Oracle will first evaluate various access plans - potentially thousands of them - and choose the one that it thinks will be best. This evaluation will be based on the number of rows it anticipates having to retrieve, whether or not all of the requested data can be retrieved from the index alone (without even going to the data segment), etc. For each consideration it makes, you start with "all else being equal". Then figure there will be dozens, if not hundreds or thousands, of these "all else being equal". Then once the plan is selected and the rubber meets the road, we have to contend with the fact that "all else" is hardly ever equal.
All of a sudden my iPhone 5C has an extremely large text size. It's so large I cannot swipe, pinch, or make it respond to any command. Any suggestions on what happened, how to fix it, and how to prevent it from happening again?
Hi Marylane,
Thank you for using Apple Support Communities.
Your description of the text being extremely large makes me think you have activated the Zoom feature of Accessibility. See the following article, and use the same command to disable Zoom as to enable it:
To enable Zoom, use three fingers and double-tap the screen.
To increase the level of Zoom, use three fingers to double-tap and hold, then move your fingers up or down on the screen to increase or decrease magnification.
Use Accessibility features in iOS - Apple Support
Cheers,
Jeff D. -
Since opening an existing Pages doc, all the previously centred images and tables are now too far to the right, leaving a large blank space to the left of each page. Any ideas on fixing this? The problem remains even when I open an earlier version in the previous version of Pages.
It has always been the case that the upgrade tries to be the "master" over the older version.
Anyway, in Pages '09 you probably have the Comments field showing, making all content move to the right.
Go to View menu > Hide Comments. -
Hello,
I would like to list the name and size of the tablespaces and also extend them if they are almost full. Since I'm more familiar with T-SQL I would greatly appreciate your help.
SQL> select u.tblspc "TBLSPC", a.fbytes "ALLOC", u.ebytes USED, a.fbytes-u.ebytes UNUSED,
2 (u.ebytes/a.fbytes)*100 USEDPCT
3 from (select tablespace_name tblspc, sum(bytes) ebytes
4 from sys.dba_extents
5 group by tablespace_name) u,
6 (select tablespace_name tblspc, sum(bytes) fbytes
7 from sys.dba_data_files
8 group by tablespace_name) a
9 where u.tblspc = a.tblspc
10 ;
TBLSPC ALLOC USED UNUSED USEDPCT
CARTEST_DATA 891289600 488701952 402587648 54,8308824
CARTEST_IDX 83886080 46465024 37421056 55,390625
CARTMPTEST_DATA 41943040 26935296 15007744 64,21875
RBS 541065216 104857600 436207616 19,379845
SYSTEM 471859200 373547008 98312192 79,1649306
TEMP 209715200 58654720 151060480 27,96875
6 rows selected.
SQL>
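The query above only reports sizes; since the original question also asked how to extend a tablespace that is almost full, here is a hedged sketch (the file names and sizes are placeholders, not taken from the thread):

```sql
-- Option 1: grow an existing datafile in place.
ALTER DATABASE DATAFILE '/u01/oradata/CARTEST/cartest_data01.dbf' RESIZE 1200M;

-- Option 2: add another datafile to the tablespace.
ALTER TABLESPACE cartest_data ADD DATAFILE
  '/u01/oradata/CARTEST/cartest_data02.dbf' SIZE 500M;

-- Option 3: let the file grow on demand, up to a cap.
ALTER DATABASE DATAFILE '/u01/oradata/CARTEST/cartest_data01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;
```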
Joel Pérez -
Why Is "Export Small File Size" Large?
Hi, I work in CS3. Out of 100 brochures, I have two brochures that will not export to a small file size PDF. The settings for export are all the same. What else should I be looking for? The sizes are usually 1 MB. These particular 2 brochures are 5 - 6 MB. What can I look for to correct this?
Thanks
Small size is very dependent on the content being exported to PDF. Images are readily compressed; text and vector data isn't. Thus, if your content is primarily text and vector, it is very unlikely that the content can be compressed significantly more via a change of settings.
Another factor that can affect image compression is the use of any duotones that use spot colors. Such images cannot be JPEG-compressed, only ZIP-compressed.
- Dov -
DB02 table space overview displays wrong sizes
Hi all
I get this result when I run transaction DB02 and the table space overview:
Tablespaces: Main data (Last analysis: 13.12.2009 04:00:29)
PSAPSR3 55,000.00 2,458.31
PSAPSR3700 44,980.00 2,128.50
PSAPUNDO 5,060.00 4,921.94
PSAPTEMP 2,000.00 2,000.00
SYSTEM 880 14.38
SYSAUX 340 20.75
PSAPSR3USR 20 14
It's displaying wrong information. The correct information, which I get with a detailed analysis, is below:
Tablespaces: Main data (Last analysis: 21.07.2010 13:04:51)
PSAPSR3 75,000.00 4,352.13
PSAPSR3700 44,980.00 1,634.44
PSAPUNDO 5,060.00 4,752.81
SYSTEM 880 10.25
SYSAUX 360 23.5
PSAPSR3USR 20 13.31
PSAPTEMP 0 0
I think it is because of the date of the last analysis, but I don't know how to update my table space overview analysis. I would appreciate it if anyone could tell me how to update the analysis.
Best regards.
Edited by: Alvaro Olmos on Jul 21, 2010 8:16 PM
To refresh the table space overview, do:
DB02 -> Refresh -> there you will find two options; choose the second one,
and it will refresh the tablespaces.
Or you can also schedule the update stats job daily to update it automatically.
Regards,
Shivam -
Table space not reduce after delete in oracle 11G
Hi Team,
I have a DB 11.1.0.7 on unix.
I have executed deletes on tables in the tablespace, but it does not shrink.
Thanks
935299 wrote:
What segment space management type is defined for the tablespace in question?
MANUAL
Then you should check out the documentation some more.
But even if you shrink the table segment, what is that going to do for the data file size?
I don't understand you.
Thanks
Your thread is titled "Table space not reduce after delete in oracle 11G", which implies to me that you are interested in reducing the size of a tablespace (which really means reducing the size of the underlying datafile(s)).
So, if you shrink the size of the sys.aud$ table, will that cause the datafile(s) to become smaller? Will it accomplish your goal? What else, if anything, needs to happen? -
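To make the reply's point concrete: deleting rows does not return space to the datafile. With MANUAL segment space management (where ALTER TABLE ... SHRINK SPACE is not available), the usual sequence is to rebuild the segment and only then resize the file; a hedged sketch with placeholder object and file names:

```sql
-- 1. Rebuild the table below its old high water mark
--    (this invalidates rowids, so dependent indexes must be rebuilt).
ALTER TABLE scott.big_table MOVE;
ALTER INDEX scott.big_table_pk REBUILD;

-- 2. Only then can the datafile be shrunk, and only down to the
--    highest allocated block still in use.
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 500M;
```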
Dear All,
My client's Export/Import process failed while importing on the quality server. The production server export process completed successfully.
"It looks like your UNDO tablespace is too small to handle a large volume transaction. Either increase your UNDO tablespace size or commit more often." I received this error at the QAR server and the client import process failed.
Please guide me on how to increase the UNDO tablespace, or suggest any other way to resolve my issue.
Regards,
Naik Diptesh
Hi,
As per your error, it looks like your undo tablespace was full, and because of that your import failed. You can increase the undo tablespace (PSAPUNDO) from brtools.
As you have not mentioned your database release, please check the notes below for undo management as per your release:
Note 1035137 - Oracle Database 10g: Automatic Undo Retention
Note 600141 - Oracle9i: Automatic UNDO Management
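Beyond brtools, the undo tablespace can also be enlarged directly in SQL*Plus; a hedged sketch, with the datafile paths as placeholders:

```sql
-- Either enlarge the existing undo datafile...
ALTER DATABASE DATAFILE '/oracle/QAR/sapdata1/undo_1/undo.data1' RESIZE 10G;

-- ...or add a second datafile to PSAPUNDO.
ALTER TABLESPACE psapundo ADD DATAFILE
  '/oracle/QAR/sapdata1/undo_2/undo.data2' SIZE 5G;

-- Check how long undo is being retained (relevant to the "commit more
-- often" half of the error message).
SHOW PARAMETER undo_retention;
```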
Thanks
Sunny -
Pagination query help needed for large table - force a different index
I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
     ( SELECT RID, rownum rnum
       FROM
            ( SELECT rowid AS RID
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
AND RID = members.rowid
The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index on the innermost query (well... read on).
The problem I have is this:
SELECT rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
This will use the index for the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence). (Verifiable with explain plan.) It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
SELECT /*+ first_rows(100) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
Either way, it now uses the index of the ORDER BY column (joindate_idx), so now it is much faster as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, in my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.*            -- Select all data from members table
FROM members,               -- members table added to FROM clause
     ( SELECT RID, rownum rnum
       FROM
            ( SELECT /*+ index(members, joindate_idx) */ rowid AS RID
                                -- Hint is ignored now that I am joining in the outer query
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
AND RID = members.rowid     -- Join the members table on the rowid we pulled from the inner queries
Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table; there is high cardinality on some columns).
So my question is, in the full query above, is there any way I can get it to use the ORDER BY column's index to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records, and it flies, even on 10 million records.
It'd be great if there were some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
Thanks!
Lakmal Rajapakse wrote:
OK here is an example to illustrate the advantage:
SQL> set autot traceonly
SQL> select * from (
2 select a.*, rownum x from
3 (
4 select a.* from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 )
9 where x >= 1100
10 /
101 rows selected.
Execution Plan
Plan hash value: 3711662397
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 1 | VIEW | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1200 | 506K| 192 (0)| 00:00:03 |
| 4 | TABLE ACCESS BY INDEX ROWID| EVENTS | 253M| 34G| 192 (0)| 00:00:03 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 1200 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("X">=1100)
2 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
443 consistent gets
0 physical reads
0 redo size
25203 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
SQL>
SQL>
SQL> select * from aoswf.events a, (
2 select rid, rownum x from
3 (
4 select rowid rid from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 ) b
9 where x >= 1100
10 and a.rowid = rid
11 /
101 rows selected.
Execution Plan
Plan hash value: 2308864810
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 201K| 261K (1)| 00:52:21 |
| 1 | NESTED LOOPS | | 1200 | 201K| 261K (1)| 00:52:21 |
|* 2 | VIEW | | 1200 | 30000 | 260K (1)| 00:52:06 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 253M| 2895M| 260K (1)| 00:52:06 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 253M| 4826M| 260K (1)| 00:52:06 |
| 6 | TABLE ACCESS BY USER ROWID| EVENTS | 1 | 147 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("X">=1100)
3 - filter(ROWNUM<=1200)
Statistics
8 recursive calls
0 db block gets
117 consistent gets
0 physical reads
0 redo size
27539 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
Lakmal (and OP),
Not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here and the order of records is important, your 2 queries will not always generate output in the same order. Here is the test case:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter pga
NAME TYPE VALUE
pga_aggregate_target big integer 103M
SQL> create table t nologging as select * from all_objects where 1 = 2 ;
Table created.
SQL> create index t_idx on t(last_ddl_time) nologging ;
Index created.
SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
40617 rows created.
SQL> commit ;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
PL/SQL procedure successfully completed.
SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME CREATED
47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
47672 ALL$OLAP2_CUBE_DIM_USES 28-JUL-2009 08:08:39
47681 ALL$OLAP2_CUBE_MEASURE_MAPS 28-JUL-2009 08:08:39
47682 ALL$OLAP2_FACT_LEVEL_USES 28-JUL-2009 08:08:39
47685 ALL$OLAP2_AGGREGATION_USES 28-JUL-2009 08:08:39
47692 ALL$OLAP2_CATALOGS 28-JUL-2009 08:08:39
47665 ALL$OLAPMR_FACTTBLKEYMAPS 28-JUL-2009 08:08:39
47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS 28-JUL-2009 08:08:39
47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS 28-JUL-2009 08:08:39
47669 ALL$OLAP9I2_HIER_DIMENSIONS 28-JUL-2009 08:08:39
47666 ALL$OLAP9I1_HIER_DIMENSIONS 28-JUL-2009 08:08:39
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> set autotrace traceonly
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
2 ;
11 rows selected.
Execution Plan
Plan hash value: 44968669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 180 (2)| 00:00:03 |
| 1 | SORT ORDER BY | | 1200 | 91200 | 180 (2)| 00:00:03 |
|* 2 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 3 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 6 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 7 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("T".ROWID="T1"."RID")
3 - filter("RN">=1190)
4 - filter(ROWNUM<=1200)
Statistics
1 recursive calls
0 db block gets
348 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
343 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
11 rows selected.
Execution Plan
Plan hash value: 168880862
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 1 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 2 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 5 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 6 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("T".ROWID="T1"."RID")
2 - filter("RN">=1190)
3 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
349 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
175 recursive calls
0 db block gets
388 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> set autotrace off
SQL> spool off
As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query. -
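To restate the conclusion above as code: if the rowid-join pagination form is kept for its speed, the ORDER BY must be repeated in the outermost query; a sketch of the corrected pattern, reusing the members/joindate names from the earlier example:

```sql
SELECT members.*
FROM members,
     ( SELECT rid, rownum rnum
       FROM ( SELECT rowid AS rid
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 ) t
WHERE t.rnum >= 1
AND   t.rid = members.rowid
ORDER BY members.joindate;  -- required: a hash join may return rows in any order
```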
I'm building a long technical report with Pages'08.
There are many tables in this report.
This document is in portrait format.
In the middle of this document I have a particularly large table which can't be read if I keep it in portrait orientation: too many columns (15). Hence I'd like to find an easy way to rotate either the page or the table so as to be able to use wider columns.
I discovered that Pages '08 doesn't permit putting a single page in landscape format. I also abandoned the idea of using 3 different documents (part 1 in portrait, part 2 in landscape, part 3 in portrait again):
I have to chain the paragraph numbers.
I have to make a table of contents at the end of this technical report.
What is the most efficient way to manage to fill this large table?
Word lets me do this, but unfortunately it also makes me spend too much time on other simple and basic functions.
Is Pages '09 better at this basic and frequent need (at least for my job)?
<pre>--------
As long as you'll see students making graphics with pen on paper,
you'll see the missing keystone of the software empire.
dan</pre>
Peggy wrote:
You can rotate a floating table, but it can be a problem if you need to edit the table. It will auto-rotate to portrait for editing, but it can be difficult to see or get to the outside edges. I find it easiest to copy & paste the table into a landscape document, then copy it back after editing.
Thank you for the nice hint.
I finally chose to work on a temporary document in A3 format, and keep it open so as to be able to quickly copy my table into the main document every time I update it.
During this copy operation I noticed an annoying problem: as the text column in my main document is slightly narrower than my table, Pages decides to shrink the table every time, and I can't recover its original size (which I painstakingly tuned in my A3 document). Hence all the cell contents are partially hidden.
The button:
Inspector > Metrics > Original Size
is greyed out.
Do you know how to circumvent this bad habit Pages has of resizing my imported table?
<pre>--------
As long as you'll see students making graphics with pen on paper,
you'll see the missing keystone of the software empire.
dan</pre>