What is buffer sort?
Hi,
I'm getting a buffer sort in the explain plan.
Please let me know what it means.
Thanks,
Kumar.
It means that Oracle is caching some data from the row source into private memory in order to avoid having to read it multiple times.
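As a toy sketch of that idea (plain Python, not Oracle internals; the row sources here are invented for illustration):

```python
# Toy model: joining every outer row to every inner row re-reads the
# inner row source once per outer row unless it is buffered.
reads = {"count": 0}

def scan_inner_source():
    """Stand-in for an expensive read from shared memory or disk."""
    reads["count"] += 1
    return ["movie_a", "movie_b", "movie_c"]

outer_rows = ["play_1", "play_2"]

# Without buffering: one scan of the inner source per outer row.
unbuffered = [(o, i) for o in outer_rows for i in scan_inner_source()]
assert reads["count"] == len(outer_rows)

# With a buffer: scan once into private memory, then reuse it.
reads["count"] = 0
buffered_source = scan_inner_source()
buffered = [(o, i) for o in outer_rows for i in buffered_source]
assert reads["count"] == 1
assert unbuffered == buffered
```

The trade is exactly the one described: a little extra private memory in exchange for not re-reading the same rows.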
Similar Messages
-
Buffer(sort) operator
Hi,
i'm trying to understand what "buffer sort" operation is in the following explain plan:
0 SELECT STATEMENT
-1 MERGE JOIN CARTESIAN
--2 TABLE ACCESS FULL PLAYS
--3 BUFFER SORT
---4 TABLE ACCESS FULL MOVIE
In Oracle 9i DataBase Performance Guide and Reference, "buffer sort" is not mentioned although all other explain plan's operations are.
What does it mean? Does it take place in main memory or is it an external sort?
Thank you.
A BUFFER SORT typically means that Oracle reads data blocks into private memory, because the blocks will be accessed multiple times in the context of the SQL statement execution. In other words, Oracle sacrifices some extra memory to reduce the overhead of accessing blocks multiple times in shared memory.
Hope this will clear your doubts.
Thanks. -
What type of sort is performed by the sort method of the Arrays class?
I used a normal bubble sort and the Arrays.sort() method to sort some given data in an array, then timed both. But the Arrays.sort() method takes more time than the normal bubble sort.
Can anybody tell me what type of sort is performed by the sort method of the Arrays class?
I'm pretty sure that in earlier versions (1.2, 1.3 maybe?) List.sort's docs said it used quicksort. Or I might be on crack.
You are actually both correct, and wrong :)
The documentation of the sort methods hasn't changed from 1.2 to 1.4 (as far as I can tell), and the documentation for sort(Object[]) says (taken from the JDK 1.2 docs):
"This sort is guaranteed to be stable: equal elements will not be reordered as a result of the sort.
The sorting algorithm is a modified mergesort (in which the merge is omitted if the highest element in the low sublist is less than the lowest element in the high sublist). This algorithm offers guaranteed n*log(n) performance, and can approach linear performance on nearly sorted lists."
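The stability guarantee quoted above is easy to see with a quick sketch (shown in Python, whose built-in sort is likewise a stable mergesort-derived algorithm; this is only an illustration, not the Java implementation):

```python
# Stability check: records with equal sort keys keep their original
# relative order after sorting.
records = [("b", 1), ("a", 2), ("b", 3), ("a", 4)]
by_key = sorted(records, key=lambda r: r[0])

# The two "a" records keep their 2-before-4 order, and the two "b"
# records keep their 1-before-3 order, from the input.
assert by_key == [("a", 2), ("a", 4), ("b", 1), ("b", 3)]
```

A quicksort offers no such guarantee, which is one reason the object sort uses mergesort while the primitive sorts (where stability is unobservable) can use a tuned quicksort.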
So, how could you be correct? The documentation for e.g. sort(int[]) (and all the other primitives) says:
"Sorts the specified array of ints into ascending numerical order. The sorting algorithm is a tuned quicksort, adapted from Jon L. Bentley and M. Douglas McIlroy's "Engineering a Sort Function", Software-Practice and Experience, Vol. 23(11) P. 1249-1265 (November 1993). This algorithm offers n*log(n) performance on many data sets that cause other quicksorts to degrade to quadratic performance."
Your memory serves you well :)
/Kaj -
What is Buffer allowed but switched off?
Sir, in the table attributes, what is "Buffering allowed but switched off"?
Hi Saurabh,
All objects (programs, tables, function modules, transaction codes etc.,) developed by anybody other than SAP will have to start with either a Y or a Z.
You will not be able to change the tables delivered by SAP unless you have a special permission from SAP.
Now, if you have got a standard table delivered by SAP (one that does not begin with a Y or a Z), then it will have been delivered with its buffering attributes already set, and these should generally not be changed.
But if there's a table that you have developed, then you can choose the buffering type as you wish.
I can't think of a reason why an SAP table would be delivered with the "Buffering allowed but switched off" option. It could probably be so in cases where the table is recommended for generic-record buffering, but SAP wants you to decide which key should be used. This is a relatively remote scenario, but it seems a plausible explanation. And in almost all such cases, there would be an SAP note for the same.
By the way, have you been able to find any standard table which has got this option for buffering? If yes, then please give me the name of the table and I will try to find out more details.
Regards,
Anand Mandalika. -
I have a query that shows one of my tables doing a full scan and a buffer sort, but I don't have any ORDER BY clause, or DISTINCT, or anything... Why does the buffer sort appear?
Thanks!
Probably because the database needs to take an interim result set and put it in a certain sequence in order to make the next phase of the query more efficient.
Something like
take the username and dept number from the emp table, sort in dept number sequence, then go to the dept table to get the department name. -
Long time on buffer sort with a insert and select through a dblink
I am doing a fairly simple "insert into select from" statement through a dblink, but something is going very wrong on the other side of the link. I am getting a huge buffer sort time in the explain plan (line 9) and I'm not sure why. When I try to run SQL tuning on it from the other side of the dblink, I get an ORA-600 error, "ORA-24327: need explicit attach before authenticating a user".
Here is the original sql:
INSERT INTO PACE_IR_MOISTURE@PRODDMT00 (SCHEDULE_SEQ, LAB_SAMPLE_ID, HSN, SAMPLE_TYPE, MATRIX, SYSTEM_ID)
SELECT DISTINCT S.SCHEDULE_SEQ, PI.LAB_SAMPLE_ID, PI.HSN, SAM.SAMPLE_TYPE, SAM.MATRIX, :B1 FROM SCHEDULES S
JOIN PERMANENT_IDS PI ON PI.HSN = S.SCHEDULE_ID
JOIN SAMPLES SAM ON PI.HSN = SAM.HSN
JOIN PROJECT_SAMPLES PS ON PS.HSN = SAM.HSN
JOIN PROJECTS P ON PS.PROJECT_SEQ = PS.PROJECT_SEQ
WHERE S.PROC_CODE = 'DRY WEIGHT' AND S.ACTIVE_FLAG = 'C' AND S.COND_CODE = 'CH' AND P.WIP_STATUS IN ('WP','HO')
AND SAM.WIP_STATUS = 'WP';
Here is the sql as it appears on proddmt00:
INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND ("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND "A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID";
Here is the explain plan on proddmt00:
PLAN_TABLE_OUTPUT
SQL_ID cvgpfkhdhn835, child number 0
INSERT INTO "PACE_IR_MOISTURE" ("SCHEDULE_SEQ","LAB_SAMPLE_ID","HSN","SAMPLE_TYPE","MATRIX","SYSTEM_ID")
SELECT DISTINCT "A6"."SCHEDULE_SEQ","A5"."LAB_SAMPLE_ID","A5"."HSN","A4"."SAMPLE_TYPE","A4"."MATRIX",:B1
FROM "SCHEDULES"@! "A6","PERMANENT_IDS"@! "A5","SAMPLES"@! "A4","PROJECT_SAMPLES"@! "A3","PROJECTS"@! "A2"
WHERE "A6"."PROC_CODE"='DRY WEIGHT' AND "A6"."ACTIVE_FLAG"='C' AND "A6"."COND_CODE"='CH' AND
("A2"."WIP_STATUS"='WP' OR "A2"."WIP_STATUS"='HO') AND "A4"."WIP_STATUS"='WP' AND
"A3"."PROJECT_SEQ"="A3"."PROJECT_SEQ" AND "A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A4"."HSN" AND
"A5"."HSN"="A6"."SCHEDULE_ID"
Plan hash value: 3310593411
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | INSERT STATEMENT | | | | | 5426M(100)| | | |
| 1 | HASH UNIQUE | | 1210K| 118M| 262M| 5426M (3)|999:59:59 | | |
|* 2 | HASH JOIN | | 763G| 54T| 8152K| 4300M (1)|999:59:59 | | |
| 3 | REMOTE | | 231K| 5429K| | 3389 (2)| 00:00:41 | ! | R->S |
| 4 | MERGE JOIN CARTESIAN | | 1254G| 61T| | 1361M (74)|999:59:59 | | |
| 5 | MERGE JOIN CARTESIAN| | 3297K| 128M| | 22869 (5)| 00:04:35 | | |
| 6 | REMOTE | SCHEDULES | 79 | 3002 | | 75 (0)| 00:00:01 | ! | R->S |
| 7 | BUFFER SORT | | 41830 | 122K| | 22794 (5)| 00:04:34 | | |
| 8 | REMOTE | PROJECTS | 41830 | 122K| | 281 (2)| 00:00:04 | ! | R->S |
| 9 | BUFFER SORT | | 380K| 4828K| | 1361M (74)|999:59:59 | | |
| 10 | REMOTE | PROJECT_SAMPLES | 380K| 4828K| | 111 (0)| 00:00:02 | ! | R->S |
Predicate Information (identified by operation id):
2 - access("A3"."HSN"="A4"."HSN" AND "A5"."HSN"="A6"."SCHEDULE_ID")
Please use code tags.
From the looks of your explain plan, these entries:
Id  Operation               Rows   Bytes  Cost (%CPU)  Time
4   MERGE JOIN CARTESIAN    1254G  61T    1361M (74)   999:59:59
5   MERGE JOIN CARTESIAN    3297K  128M   22869 (5)    00:04:35
are causing extensive CPU processing, probably due to the cartesian joins (which include sorting)... does "61T" mean 61 terabytes? Holy hell.
From the looks of the explain plan these tables don't look partitioned.... can you confirm?
Why are you selecting DISTINCT? If this is for an ETL or data-warehouse-related procedure, it ain't a good idea to use DISTINCT... well, ever... it's horrible for performance.
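The row estimates also line up with how a cartesian join's cardinality is simply the product of its inputs. A quick check using the (rounded) counts from the plan above; because the displayed values are rounded, the products only approximately match the plan's 3297K and 1254G:

```python
# Cardinality of a cartesian (cross) join = product of its inputs.
# Row counts below are the optimizer estimates from the plan above.
schedules_rows = 79             # Id 6: REMOTE SCHEDULES
projects_rows = 41_830          # Id 8: REMOTE PROJECTS
project_samples_rows = 380_000  # Id 10: REMOTE PROJECT_SAMPLES ("380K")

inner = schedules_rows * projects_rows   # plan Id 5 shows ~3297K
outer = inner * project_samples_rows     # plan Id 4 shows ~1254G

assert inner == 3_304_570            # ~3.3 million rows
assert outer == 1_255_736_600_000    # ~1.26 trillion rows
```

This is why the self-referencing predicate `PS.PROJECT_SEQ = PS.PROJECT_SEQ` (which is always true) is so costly: with no real join condition the optimizer has nothing to reduce the product with.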
Edited by: TheDudeNJ on Oct 13, 2009 1:11 PM -
What causes BUFFER GETS and PHYSICAL READS in INSERT operation to be high?
Hi All,
I am performing a huge number of INSERTs into a newly installed Oracle XE 10.2.0.1.0 on Windows. There is no SELECT statement running, just INSERTs one after the other, 550,000 in count. When I monitor the SESSION I/O from Home > Administration > Database Monitor > Sessions, I see the following stats:
BUFFER GETS = 1,550,560
CONSISTENT GETS = 512,036
PHYSICAL READS = 3,834
BLOCK CHANGES = 1,034,232
The presence of these two stats confuses me. Though the operation in this session is just INSERTs, why should there be BUFFER GETS of this magnitude, and why should there be PHYSICAL READS? Aren't these parameters for read operations? The BLOCK CHANGES value is clear, as there are huge writes and the writes change that many blocks. Can any kind soul explain to me what causes these parameters to show high values?
The total columns in the display table are as follows (from the link mentioned above)
1. Status
2. SID
3. Database Users
4. Command
5. Time
6. Block Gets
7. Consistent Gets
8. Physical Reads
9. Block Changes
10. Consistent Changes
What do CONSISTENT GETS and CONSISTENT CHANGES mean in a typical INSERT operation? And does someone know which tables are involved in getting these values?
Thanks,
...Flake wrote:
Hans, gracias.
The table just has 2 columns, both of which are varchar2(500). No constraints, no indexes, and no foreign key references are in place. The total RAM in the system is 1GB, and yes, there are other GUI applications running, like the Firefox browser, Notepad and command terminals.
But what do these other applications have to do with Oracle BUFFER GETS, PHYSICAL READS, etc.? Awaiting your reply.
Total RAM is 1GB. If you let XE decide how much RAM is to be allocated to buffers, on startup that needs to be shared with any/all other applications. Let's say that leaves us with, say, 400M for the SGA + PGA.
PGA is used for internal stuff, such as sorting, which is also used in determining the layout of secondary facets such as indexes and uniqueness. Total PGA usage varies in size based on the number of connections and required operations.
And then there's the SGA. That needs to cover the space requirement for the data dictionary, any/all stored procedures and SQL statements being run, user security and so on. As well as the buffer blocks which represent the tablespace of the database. Since it is rare that the entire tablespace will fit into memory, stuff needs to be swapped in and out.
So - put too much space pressure on the poor operating system before starting the database, and the SGA may be squeezed. Put that space pressure on the system and you may end up with swapping or paging.
This is one of the reasons Oracle professionals will argue for dedicated machines to handle Oracle software. -
What's the sort order for podcasts on the shuffle?
Hi all... Can someone tell me how the iPod Shuffle orders podcasts? I've seen MANY posts saying there's no way to manually sort podcasts on the shuffle (which is crazy IMO), but at least if I know how it's ordering the podcasts that'll help me identify which one is next.
I added 15 podcasts to the shuffle, and sorting by every category I can find, none matches the order the shuffle places the podcasts in.
So first question -- what order are the podcasts stored in?
And second question -- Why the heck did Apple do this? Why can't podcasts just be treated like standard MP3 songs?
And finally, if there's no answer to either of these, is there any way to get iPodderX to sync to the iPod shuffle? I love iTunes, but if it's going to hinder my hardware I'll switch to something else.
Thanks for any help on this, and take care --
Sam
Hi Dean,
I'm wondering if something's wrong with my podcast, because that's not what happens on mine. For example I have my podcasts in this order:
NPR Technology
Haunted New Jersey
Autumn in New England
Hometown Tales Podcast
And even after selecting File > Update iPod, and even right-clicking on the iPod and selecting Copy to Play Order, the order of the podcasts is:
Haunted New Jersey
Hometown Tales Podcast
NPR Technology
Hometown Tale Podcast
Nothing I've done sorts the podcasts as they should be sorted. Also, I set it to not show the iPod unless it's connected, and when I sort the podcasts as I want, update the iPod, and plug it back in, it shows the wacky sort order they're stored in on the iPod. I can rearrange again, update again, eject and plug back in, and the crazy sort order (not mine) is back.
Is there something I'm missing????
Sam -
What percentage of sorting is done in Memory?
DB Version:10g Release 2
When you have a query with a large number of columns in the ORDER BY clause, or a query without an ORDER BY clause but with a large number of columns in the SELECT list, the PGA_AGGREGATE_TARGET might not be sufficient to do this job. So, what % of the PGA is set aside by Oracle for sorting? When this reserved % is reached in memory (PGA_AGGREGATE_TARGET), the temp tablespace is used, i.e. a disk sort. Right?
J.Kiechle wrote:
DB Version:10g Release 2
When you have a query with a large number of columns in the ORDER BY clause, or a query without an ORDER BY clause but with a large number of columns in the SELECT list, the PGA_AGGREGATE_TARGET might not be sufficient to do this job. So, what % of the PGA is set aside by Oracle for sorting? When this reserved % is reached in memory (PGA_AGGREGATE_TARGET), the temp tablespace is used, i.e. a disk sort. Right?
It's probably a bit different and a bit more complex, but in a nutshell the PGA_AGGREGATE_TARGET defines the upper limit that Oracle should use for the PGA areas of the processes used to run the database.
There is a non-tunable part that contributes to the PGA_AGGREGATE_TARGET, that can be significantly influenced e.g. by large PL/SQL collections or Java programs. The memory consumed by these can not be controlled by Oracle and therefore can't be reduced, but will be used in the overall calculation.
The tunable part consists of the SQL workareas that are used to sort, group or hash data as part of the SQL execution.
The value of PGA_AGGREGATE_TARGET determines several internal parameters, among them _pga_max_size, _smm_max_size and _smm_px_max_size. These internal parameters control the maximum amount of memory that can be used by a single process (_pga_max_size), a serial operation resp. "workarea" (_smm_max_size), and the maximum memory available for the operation of a parallel slave in a parallel operation (_smm_px_max_size).
There is a significant difference between 10.2 and previous Oracle releases regarding these internal parameters:
In pre-10.2 databases _pga_max_size defaults to 200M, and _smm_max_size is the least of 5% of PGA_AGGREGATE_TARGET, 50% of _pga_max_size, and 100M (if you set _pga_max_size larger than the default value). The _smm_px_max_size is 30% of PGA_AGGREGATE_TARGET and is divided by the parallel degree of the parallel operation to determine, together with _smm_max_size, the upper limit of the workarea size of a single parallel slave.
In 10.2 the upper limits are driven by _smm_max_size, which is derived from PGA_AGGREGATE_TARGET and can be larger than 100M if you have a PGA_AGGREGATE_TARGET greater than 1GB. The _pga_max_size is then two times _smm_max_size.
So in pre-10.2 databases the default maximum size of a single sort is 100M, provided you've set PGA_AGGREGATE_TARGET to 2GB or greater, but a single process - which could have multiple workareas or sorts simultaneously - is not allowed to allocate more than 200MB in total.
In 10.2 and later you can have more than 100M for a single sort if you set your PGA_AGGREGATE_TARGET larger than 1GB, and a process can consume more than 200M in that case, too.
For more information about these parameters, see e.g. these two interesting notes:
http://christianbilien.wordpress.com/2007/05/01/two-useful-hidden-parameters-smmmax_size-and-pgamax-size/
http://www.jlcomp.demon.co.uk/untested.html
The amount of memory that remains after subtracting the non-tunable memory allocated from the PGA_AGGREGATE_TARGET, together with the number and size of concurrent tunable workareas, determines the amount of memory available for newly established workareas. Oracle tries its best to allocate the available memory to all current workareas while attempting to stay below the PGA_AGGREGATE_TARGET. Obviously, if many workareas are active concurrently, the amount of memory available for each workarea will be less than the upper limits outlined above, down to a lower limit defined by the internal parameter _smm_min_size (the greater of 128k and 0.1% of PGA_AGGREGATE_TARGET).
Given these constraints it is possible that Oracle consumes more than the PGA_AGGREGATE_TARGET, e.g. if the non-tunable part already takes a significant part of the PGA_AGGREGATE_TARGET. You can see this in V$PGASTAT if the "over allocation count" statistic value is > 0.
The cost based optimizer also uses the information derived from PGA_AGGREGATE_TARGET to calculate the cost of a sort resp. to estimate whether a sort will be in-memory or has to spill to disk.
There are various views available that allow you to monitor the workarea information, among them are V$PGASTAT for an overall information regarding the PGA consumption, V$PGA_TARGET_ADVICE, V$PGA_TARGET_ADVICE_HISTOGRAM and V$SQL_WORKAREA_HISTOGRAM, V$SQL_WORKAREA_ACTIVE and V$SQL_WORKAREA for monitoring individual workareas.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
What is the "sorting" area in the Get Info menu?
I would like to know what the sorting area does but I also have a specific question.
I bought the singles for an album, but now that the album has come out, the singles and the rest of the album are grouped separately. When I try to play songs from just that album in iTunes or on my iPhone, the singles are excluded. Can I use the "Sort Album" field in the Get Info menu for the singles to group them with the album? Will I lose the album artwork of the singles in iTunes?
Thanks for the reply.
Strangely enough, I didn't find any information on it in the help system, in the knowledge base, or in any of my Tiger books. Seems this was overlooked in just about everything.
What do the "sort" options do in iTunes?
I've found lots of questions similar to the one I am asking, but none of them seem to explain in a straight-forward manner what the options do. Please could you tell me in the simplest way possible what the following options do.
Sort Artist.
Sort Album Artist.
Sort Album.
Sort Composer.
Thanks in advance!
These options specify which field is used for determining the order in which music is displayed. There are 4 such fields in the tagging of songs. The ones that are pretty much always set are:
Album: Album Name
Artist: Artist on *this* song
The other two are often left blank
Album Artist: This is a single artist name to keep tracks from albums with different artists on different tracks from being split up and displayed in different places. It is often set to "Various Artists" or maybe one artist name when some of the tracks have guest artists in the "Artist" field.
Composer: Generally used for classical music, where you might be more likely to care that Mozart composed a song rather than which symphony was playing it.
The default is for it to sort first by Album Artist (which will use "Artist" if "Album Artist" is blank) and then by "Album".
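A minimal sketch of that fallback rule (plain Python; the field names and the fallback logic here are my reading of the description above, not Apple's actual implementation):

```python
# Toy model of the default ordering: sort by Album Artist, falling back
# to Artist when Album Artist is blank, then by Album.
tracks = [
    {"artist": "Guest A", "album_artist": "Various Artists", "album": "Mix"},
    {"artist": "Mozart",  "album_artist": "",                "album": "Requiem"},
    {"artist": "Guest B", "album_artist": "Various Artists", "album": "Mix"},
]

def sort_key(t):
    # Empty string is falsy, so "or" supplies the Artist fallback.
    return (t["album_artist"] or t["artist"], t["album"])

ordered = sorted(tracks, key=sort_key)

# "Mozart" < "Various Artists", and the two compilation tracks stay
# together under "Various Artists" instead of splitting by guest artist.
assert [t["artist"] for t in ordered] == ["Mozart", "Guest A", "Guest B"]
```

This also shows why setting Album Artist to "Various Artists" keeps compilation tracks grouped.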
To find out the value of the fields, right-click->Get Info on the song in iTunes. -
What is a buffer? How many types of buffers are there?
How many types of joins are there?
Thank you.
Whenever an Open SQL statement is used to read a record,
the data buffer is checked first to see whether it is there. If not, the data is read from the database.
If the table's attributes indicate that the data should be buffered,
the record is saved in RAM on the application server in data buffers.
Later, if that record is read again, it is read from the buffer instead of the database.
By buffering data, you increase performance in two important ways:
The programs using the buffered data run faster because they don't have to wait for it to come from the database. This reduces delays waiting for the database and the network that connects it.
The other programs that need to access the database run faster because there is less load on the database and less traffic on the network.
Three types of buffering are possible:
Full Buffering
With full buffering, either the entire table is in the buffer or the table is not in the buffer at all. All
the records of the table are loaded into the buffer when one record of the table is read.
In this example, a program reads the record highlighted in red from table
SCOUNTER. If the table is fully buffered, all the records of the table are loaded into
the buffer.
The buffered data records are sorted in the buffer by table key. Accesses to the
buffered data can therefore only analyze field contents up to the last specified key
field for restricting the dataset to be searched.
The left-justified part of the key should therefore be as large as possible in such
accesses. For example, if you do not define the first key field, the system has to scan
the full table. In this case direct access to the database can be more efficient if the
database has suitable secondary indexes.
When Should you Use Full Buffering?
When deciding whether a table should be fully buffered, you should take into account the size of
the table, the number of read accesses, and the number of write accesses. Tables best suited to
full buffering are small, read frequently, and rarely written.
Full buffering is recommended in the following cases:
BC - ABAP Dictionary SAP AG
Full Buffering
36 December 1999
Tables up to 30 KB in size. If a table is accessed frequently, but all accesses are read
accesses, this value can be exceeded. However, you should always pay attention to the
buffer utilization.
Larger tables where large numbers of records are frequently accessed. If these mass
accesses can be formulated with a very selective WHERE condition using a database
index, it could be better to dispense with buffering.
Tables for which accesses to non-existent records are frequently submitted. Since all the
table records reside in the buffer, the system can determine directly in the buffer whether
or not a record exists.
Generic Buffering
With generic buffering, all the records in the buffer whose generic key fields match this record are
loaded when one record of the table is accessed. The generic key is a part of the primary key of
the table that is left-justified.
In this example, the record highlighted in red is read by a program from table
SCOUNTER. If the table is generically buffered, all the records read whose generic
key fields (MANDT and CARRID) agree are loaded into the buffer.
When Should you Use Generic Buffering?
A table should be buffered generically if only certain generic areas of the table are normally
needed for processing.
Client-specific, fully-buffered tables are automatically generically buffered since normally it is not
possible to work in all clients at the same time on an application server. The client field is the
generic key.
Language-specific tables are another example where generic buffering is recommended. In
general, only records of one language will be needed on an application server. In this case, the
generic key includes all the key fields up to and including the language field.
How Should you Define the Generic Key?
In generic buffering, it is crucial to define a suitable generic key.
If the generic key is too small, the buffer will contain a few very large areas. During access, too
much data might be loaded in the buffer.
If the generic key is too large, the buffer might contain too many small generic areas. These can
reduce buffer performance since there is an administrative entry for every buffered generic area.
It is also possible that too many accesses will bypass the buffer and go directly to the database,
since they do not fully define the generic key of the table. If there are only a few records in each
generic area, it is usually better to fully buffer the table.
Only 64 bytes of the generic key are used. You can specify a longer generic key, but the part of
the key exceeding 64 bytes is not used to create the generic areas.
Access to Buffered Data
It only makes sense to generically buffer a table if the table is accessed with fully-specified
generic key fields. If a field of the generic key is not assigned a value in a SELECT statement, it
is read directly from the database, bypassing the buffer.
If you access a generic area that is not in the buffer with a fully-specified generic key, you will
access the database to load the area. If the table does not contain any records in the specified
area ("No record found"), this area in the buffer is marked as non-existent. It is not necessary to
access the database if this area is needed again.
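A toy sketch of generic buffering (plain Python, not SAP internals; the table contents are invented, with MANDT and CARRID as the generic key):

```python
# Toy generic buffer: records are cached per generic-key prefix.
# A read that fully specifies the generic key is served per cached area;
# anything else bypasses the buffer and goes to the database.
DB = {
    ("100", "LH", "1"): "row-a",
    ("100", "LH", "2"): "row-b",
    ("100", "AA", "1"): "row-c",
}
buffer = {}            # generic area -> {full key: row}
db_reads = {"count": 0}

def select(mandt=None, carrid=None, counter=None):
    if mandt is None or carrid is None:
        # Generic key not fully specified: bypass the buffer.
        db_reads["count"] += 1
        return [r for k, r in DB.items()
                if (mandt is None or k[0] == mandt)
                and (carrid is None or k[1] == carrid)
                and (counter is None or k[2] == counter)]
    generic = (mandt, carrid)
    if generic not in buffer:
        # Load the whole generic area once; an empty area acts as the
        # "no record found" marker described above.
        db_reads["count"] += 1
        buffer[generic] = {k: r for k, r in DB.items() if k[:2] == generic}
    area = buffer[generic]
    return [r for k, r in area.items() if counter is None or k[2] == counter]

assert select("100", "LH", "1") == ["row-a"]   # loads area ("100", "LH")
assert select("100", "LH", "2") == ["row-b"]   # served from the buffer
assert db_reads["count"] == 1
```

Note how the second read costs no database access because its generic area was already loaded.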
Single-Record Buffering
With single-record buffering, only the records that are actually read are loaded into the buffer.
Single-record buffering therefore requires less storage space in the buffer than generic and full
buffering. The administrative costs in the buffer, however, are greater than for generic or full
buffering. Considerably more database accesses are necessary to load the records than for the
other buffering types.
In this example, the record highlighted in red is read by a program from table
SCOUNTER. If single-record buffering is selected for the table, only the record that
was read is loaded into the buffer.
When Should you Use Single-Record Buffering?
Single-record buffering should be used particularly for large tables where only a few records are
accessed with SELECT SINGLE. The size of the records being accessed should be between 100
and 200 KB.
Full buffering is usually more suitable for smaller tables that are accessed frequently. This is
because only one database access is necessary to load such a table with full buffering, whereas
several database accesses are necessary for single-record buffering.
Access to Buffered Data
All accesses that are not submitted with SELECT SINGLE go directly to the database, bypassing
the buffer. This applies even if the complete key is specified in the SELECT statement.
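Single-record buffering can be sketched the same way (again a toy Python model, not SAP internals; keys and values are invented):

```python
# Toy single-record buffer: only SELECT SINGLE-style reads use the
# buffer, and each record is loaded individually on first access.
DB = {("100", "JFK"): "counter-1", ("100", "LAX"): "counter-2"}
buffer = {}
db_reads = {"count": 0}

def select_single(key):
    if key not in buffer:
        db_reads["count"] += 1
        # Cache the record -- or its absence -- so a repeated lookup
        # does not touch the database again.
        buffer[key] = DB.get(key)
    return buffer[key]

assert select_single(("100", "JFK")) == "counter-1"  # database read
assert select_single(("100", "JFK")) == "counter-1"  # buffer hit
assert select_single(("100", "XXX")) is None         # absence is cached too
assert db_reads["count"] == 2
```

Compared with the full-buffering sketch, each miss costs its own database access, which is the administrative overhead the text describes.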
If you access a record which is not yet buffered with SELECT SINGLE, there is a database
access to load the record. This record is marked in the buffer as non-existent if the table does not
contain a record with the specified key. -
What's the sort order of ?
Just updated to iOS 7 on my iPhone 4s - and now all my podcasts are in random order.
In iTunes, they are all in alphabetical order by title. When synced to the iPhone, they are in some random order. How can I re-sort the podcasts on the iPhone? (This was not the case before the update to iOS 7.)
Thanks.
Some context would be helpful. Please post the full SQL statement and full execution plan.
-Mark -
[SOLVED] What is this sorting algorithm? (or a new one?)
Hello everyone!
Just before starting, i apologize for my grammar mistakes.
I found a new sorting algorithm but i'm not sure if i really found it. There are too many sorting algorithms and mine is a really simple one; so, i belive that it can be found years ago.
I searched popular sorting algorithms, but none of the them is the answer.
Here is algorithm:
* Search the numbers between brackets
[24 12 12 55 64 18 32 31]
* Find smallest one
[24 12 12 55 64 18 32 31]
^S
* Swap the first item between brackets with smallest one
[12 12 24 55 64 18 32 31]
* Find largest one
[12 12 24 55 64 18 32 31]
^L
* Swap the last item between brackets with largest one
[12 12 24 55 31 18 32 64]
* Move brackets by one.
12[12 24 55 31 18 32]64
* Continue from step one until the array is sorted
/* rottsort
   Copyright (c) 2013 Bora M. Alper */
#include <stdio.h>

void print_array (const int *array, const int length);
void rottsort_swap (int *x, int *y);
void rottsort (int *array, const int length);
int rottsort_largest (const int *array, const int start, const int end);
int rottsort_smallest (const int *array, const int start, const int end);

void print_array (const int *array, const int length) {
    int i;
    for (i = 0; i < length; ++i)
        printf ("%d ", array[i]);
    putchar ('\n');
}

int main (void) {
    int array[] = {24, 12, 12, 55, 64, 18, 32, 31};
    print_array(array, 8);
    rottsort(array, 8);
    print_array(array, 8);
    return 0;
}

/* Swap returns nothing, so it is declared void rather than int. */
void rottsort_swap (int *x, int *y) {
    const int temp = *x;
    *x = *y;
    *y = temp;
}

void rottsort (int *array, const int length) {
    int i, largest_pos, smallest_pos;
    for (i = 0; i < length/2; ++i) {
        /* Place the largest of the unsorted range at its right end... */
        largest_pos = rottsort_largest(array, i, length-1-i);
        rottsort_swap(&(array[largest_pos]), &(array[length-1-i]));
        /* ...then the smallest at its left end, and shrink the range. */
        smallest_pos = rottsort_smallest(array, i, length-1-i);
        rottsort_swap(&(array[smallest_pos]), &(array[i]));
    }
}

int rottsort_largest (const int *array, const int start, const int end) {
    int i, largest_pos = start;
    for (i = start; i <= end; ++i)
        if (array[i] >= array[largest_pos])
            largest_pos = i;
    return largest_pos;
}

int rottsort_smallest (const int *array, const int start, const int end) {
    int i, smallest_pos = start;
    for (i = start; i <= end; ++i)
        if (array[i] <= array[smallest_pos])
            smallest_pos = i;
    return smallest_pos;
}
P.S.: If this is a new sorting algorithm, I'll name it "rottsort". :)
Last edited by boraalper4 (2013-08-11 19:08:17)
Trilby wrote:
Because you already have two variables for largest and smallest, there is no reason to loop through the whole list twice to get each. Loop through the list (or list subset) once, and in each iteration check whether the current item is smaller than the current smallest or larger than the current largest.
This will increase efficiency by a factor of two.
As written I believe it'd be less efficient than even a simple bubble sort. With the above revision it may be comparable to a bubble sort.
Thanks for the quick answer and advice. :) I will try to do that. When I'm done, I will post the new code.
The code is tested on codepad. (I edited the code on my phone, so sorry for the formatting.)
/* rottsort
   Copyright (c) 2013 Bora M. Alper */
#include <stdio.h>

void print_array (const int *array, const int length);
void rottsort_swap (int *x, int *y);
void rottsort (int *array, const int length);
void rottsort_find (int *smallest_pos, int *largest_pos, const int *array, const int start, const int end);

void print_array (const int *array, const int length) {
    int i;
    for (i=0; i < length; ++i)
        printf ("%d ", array[i]);
    putchar ('\n');
}

int main (void) {
    int array[] = {24, 12, 12, 55, 64, 18, 32, 31};
    print_array(array, 8);
    rottsort(array, 8);
    print_array(array, 8);
    return 0;
}

void rottsort_swap (int *x, int *y) {
    const int temp = *x;
    *x = *y;
    *y = temp;
}

void rottsort (int *array, const int length) {
    int i, largest_pos, smallest_pos;
    for (i=0; i < length/2; ++i) {
        /* one pass finds both extremes of the current window */
        rottsort_find (&smallest_pos, &largest_pos, array, i, length-1-i);
        rottsort_swap(&(array[largest_pos]), &(array[length-1-i]));
        /* if the smallest was sitting at the right end, the swap
           above moved it to where the largest used to be */
        if (smallest_pos == length-1-i)
            smallest_pos = largest_pos;
        rottsort_swap(&(array[smallest_pos]), &(array[i]));
    }
}

void rottsort_find (int *smallest_pos, int *largest_pos, const int *array, const int start, const int end) {
    int i;
    *smallest_pos = start;
    *largest_pos = start;
    for (i=start; i <= end; ++i) {
        if (array[i] >= array[*largest_pos])
            *largest_pos = i;
        if (array[i] <= array[*smallest_pos])
            *smallest_pos = i;
    }
}
Last edited by boraalper4 (2013-08-11 15:21:48) -
What does "Apply Sort Field" mean?
I click on +same album+ but nothing seems to happen?
Say that you decide that you want to have your David Bowie tracks sorted under B rather than D. Select any David Bowie track, change the sort artist to Bowie or *Bowie, David* and then use *Apply Sort Field > Same Artist* to have this change replicated to every David Bowie track in your library.
tt2