Select images for a book from a light table?
Hi,
the light table seems to be a good place to group pictures, set relative sizes and define some general ordering. Once I have done that using the light table, I want to create a book based on the light table. One way would be to create a new empty book and drag pictures from the light table to the pages of the book.
Is there a way to view a light table and the book's pages at the same time?
Regards,
Rainer Joswig
PowerMac Dual G5, 2.5 GHz, Mac OS X (10.4.3)
No, you don't have to crop them. The "frames" on each page are set for the 4:3 size so that when a photo is put in a frame and enlarged to fill it, all portions of the photo are still visible. You can put a 3:2 photo in a frame and it should enlarge just so the long edge fills the frame, leaving you with a little more white space.
There's a slider to enlarge if necessary. If you Control-click on the image, there will be an option to Fit photo to frame size. That may cut some off. But you'll need to experiment to see if it will suit your needs.
Do you Twango?
Similar Messages
-
How to remove selected image from a light table?
According to the Aperture User Manual, when I select an image on a light table, I should be able to remove it (while still leaving it in the light table browser). However, this seems to work only for the image I most recently added to the light table -- when I select an image I added just before the most recent, the button at the upper left for "Light Table Put Back Selected" is grayed out. (Also, the Command-Shift-P shortcut shown in the tooltip doesn't do anything.) Am I misunderstanding the use of this button? Doing something wrong? I added some images and decided I wanted to put a few of them back, but can't figure out a way to do that. That makes the light table pretty useless except for laying out a set of images you are already committed to.
Doesn't seem possible.
Hey, thanks -- everything worked as you described. It never would have occurred to me that the "put back" button refers to the browser selection not the layout selection, for reasons I give below.
I think Aperture exhibits some conceptual confusion here. First of all, a "light table" is created by the New > Light Table command, and you add images to the newly created light table just as you would add them to an album. But those images don't go on the light table layout until you drag them from the browser to the layout table. The "put back" button and "remove from light table" contextual menu command both mean remove from the layout. The phrasing of the contextual menu command makes it sound as if it will remove the clicked image from the light table's collection of images, like removing an image from an album -- it would have made more sense to name this command "put back". Furthermore, the "put back" button is above the layout display, not above the browser, which to me implies that it applies to selections in the layout display. Selecting an image in the browser and then clicking "put back selected" seems backwards -- if you just selected the image in the browser, your attention is focused on the browser, not the layout; from the browser's point of view, the button should read "bring back" not "put back".
The documentation didn't really help when I was trying to figure this out. Sometimes the Aperture documentation refers to just a "light table", as on the overview on page 732. On pages 733 and 734, though, it refers to a "light table album". On page 735 of the Aperture manual there is a very brief explanation of how "to add images to the light table" and "to remove an image from the light table". Here, "light table" refers to the layout not the album. The explanation of the "Put Back Selected" button says to select an image then click the button, and the picture shown is of an image selected in the layout, not in the browser. -
iTunes quit unexpectedly during the download of a movie. When I click on that movie now I get the following message: "You have already purchased this item but it has not been downloaded. To download it, select Check For Available Downloads from the Store menu."
I follow the instructions, but the download still doesn't resume - err = 11111
Any advice?
Yeah, I am going to blast my iPhone for the 4th time today and pray this goes away. I had 6 apps showing updates, the 6 badge icon shows on the App Store on my iPhone and, having run them all, the 6 apps have two copies each on my iPhone: one that is the old version of the app and will not run, and the other is a faded version of the app with an empty progress bar on the icon. Click those and you get an error message about connecting to iTunes, etc., as described in the article I linked to above.
It gets stranger than this. If I install a completely NEW app, the 6 that are stuck mid-update heal themselves and appear with the new version only; however, now completely different apps (Hold Em for instance - which I did not update and didn't need an update) will not launch, and an "app name Cannot Launch" message comes up.
Agh! -
Query regarding the data type for fetching records from multiple ODS tables
hey guys;
I have a query regarding the data type for fetching records from multiple ODS tables.
If I have 2 tables with the same column name, then in the data type, under the parent Row node, I can't add 2 nodes with the same name.
Can anyone help with some suggestions?
Hi Mudit,
One option would be to go as mentioned by Padamja, prefixing the table name to the column name; another would be to use the AS keyword in your SQL statement.
AS is used to rename the column name when data is being selected from your DB.
So, the query Select ename as empname from emptable will return the data with column name empname.
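To see the aliasing effect concretely, here is a small sketch using Python's built-in sqlite3 (the table and column names mirror the thread's emptable/ename example; the alias behaviour itself is standard SQL, not specific to any one database):

```python
import sqlite3

# Illustrative table mirroring the thread's "emptable" example.
conn = sqlite3.connect(":memory:")
conn.execute("create table emptable (ename text)")
conn.execute("insert into emptable values ('Smith')")

# The AS keyword renames the column in the result set only; the table
# definition is unchanged.
cur = conn.execute("select ename as empname from emptable")
print(cur.description[0][0])  # empname
```

The result set exposes the column under its alias, which is exactly why AS resolves the duplicate-name clash between two source tables.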
Regards,
Bhavesh -
How to fetch data for a structure from a cluster table
How can I fetch data for a structure, from a cluster table, based on the name of the structure?
Hi,
In order to read from Cluster DB Table use the following statement:
Syntax
IMPORT <f1> [ TO < g1 > ] <f2> [TO < g2 >] ...
FROM DATABASE <dbtab>(<ar>)
[CLIENT <cli>] ID <key>|MAJOR-ID <maid> [MINOR-ID <miid>].
This statement reads the data objects specified in the list from a cluster in the database <dbtab>.
You must declare <dbtab> using a TABLES statement. If you do not use the TO <gi> option, the
data object <fi> in the database is assigned to the data object in the program with the same
name. If you do use the option, the data object <fi> is read from the database into the field <gi>.
For <ar>, enter the two-character area ID for the cluster in the database. The name <key>
identifies the data in the database. Its maximum length depends on the length of the name field
in <dbtab>.
The CLIENT <cli> option allows you to disable the automatic client handling of a client-specific cluster database, and specify the client yourself. The addition must always come directly after the name of the database.
For Eg:
PROGRAM SAPMZTS3.
TABLES INDX.
DATA: BEGIN OF JTAB OCCURS 100,
COL1 TYPE I,
COL2 TYPE I,
END OF JTAB.
IMPORT ITAB TO JTAB FROM DATABASE INDX(HK) ID 'Table'.
WRITE: / 'AEDAT:', INDX-AEDAT,
/ 'USERA:', INDX-USERA,
/ 'PGMID:', INDX-PGMID.
SKIP.
WRITE 'JTAB:'.
LOOP AT JTAB FROM 1 TO 5.
WRITE: / JTAB-COL1, JTAB-COL2.
ENDLOOP.
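Conceptually, IMPORT ... FROM DATABASE just reads back named data objects that were serialized under a two-character area plus a key (here, area 'HK' and ID 'Table' in table INDX). A rough Python sketch of that idea - the dict-based store and function names are purely illustrative stand-ins for the cluster table, not an ABAP API:

```python
import pickle

# Illustrative stand-in for a cluster table such as INDX: each row is keyed
# by (area, id) and holds a serialized blob of named data objects.
cluster_db = {}

def export_to_database(area, key, **objects):
    # Analogous to: EXPORT f1 f2 ... TO DATABASE dbtab(area) ID key
    cluster_db[(area, key)] = pickle.dumps(objects)

def import_from_database(area, key):
    # Analogous to: IMPORT f1 f2 ... FROM DATABASE dbtab(area) ID key
    return pickle.loads(cluster_db[(area, key)])

# Mirror of the ABAP example: store an internal table under area 'HK', ID 'Table'.
export_to_database("HK", "Table", itab=[(1, 3), (2, 4)])
restored = import_from_database("HK", "Table")
print(restored["itab"])  # [(1, 3), (2, 4)]
```

The TO option in the ABAP statement corresponds to assigning the restored object to a differently named target, as IMPORT ITAB TO JTAB does above.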
Regards,
Neha
Edited by: Neha Shukla on Mar 12, 2009 1:35 PM -
Asha 501 can't select image for contact list
Hi,
I cannot see any option to select an image for my contacts list. Is there any software available to select this, or do I need to wait for a future release of Asha platform 1?
Appreciate your help in this regard.
Solved!
Go to Solution.
Hi,
just navigate to the contact you want to assign a picture.
Slide in the menu bar from the bottom of the screen and press "edit".
The contact details appear and in the top-left corner are the initials of the contact and a small "+" sign.
Press there (the whole corner is OK, no need to aim for the +) and you can choose a picture from the gallery.
Best regards
MrOxx
+++
Please note: I work for Nokia, but all opinions posted here are my own.
+++
I will not answer PMs with product related questions. Please ask questions in the forum. Thank you.
+++ -
I cannot figure out how to get photos from iPhoto "albums" into a book I want to produce.
There's a selection of recently downloaded photos at the top of the window, and I can move these into the book, but do not see how to move those I put into an "album" for this purpose.
To start a book, make an album and put the photos for the book in it, select all, and create the book - to add photos to a book that is in progress, simply drag photos to the book in the source pane on the left.
LN -
Query to select value for max date from a varchar field containing year and month
I'm having trouble selecting a value from a table that contains a varchar column in YYYY-MM format.
ex.
Emp_id Date Cost
10264 2013-01 5.00
33644 2013-12 84.00
10264 2013-02 12.00
33644 2012-01 680.0
59842 2014-05 57.00
In the sample data above, I would like to be able to select the cost for each Emp_id at the max date. Ex. for Emp_id 10264, the cost should be 12.00.
create table test (Emp_id int, Date varchar(10), Cost decimal (6,2))
insert into test values(
10264, '2013-01', 5.00 ),
(33644, '2013-12', 84.00 ),
(10264, '2013-02', 12.00 ),
(33644, '2012-01', 680.0 ),
(59842, '2014-05', 57.00 )
Select Emp_id,[Date],Cost FROM (
select *,row_number() Over(Partition by Emp_id Order by Cast([Date]+'-1' as Datetime) Desc) rn
from test)
t
WHERE rn=1
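The "latest YYYY-MM row per Emp_id" logic in the query above can be sanity-checked with a quick Python sketch over the same sample data (the key point being that 'YYYY-MM' strings compare correctly as plain strings):

```python
# Sample rows from the thread: (Emp_id, Date, Cost).
rows = [
    (10264, "2013-01", 5.00),
    (33644, "2013-12", 84.00),
    (10264, "2013-02", 12.00),
    (33644, "2012-01", 680.0),
    (59842, "2014-05", 57.00),
]

latest = {}
for emp_id, ym, cost in rows:
    # Zero-padded 'YYYY-MM' strings sort lexicographically in date order,
    # so a plain string comparison picks the most recent month per employee.
    if emp_id not in latest or ym > latest[emp_id][0]:
        latest[emp_id] = (ym, cost)

print(latest[10264])  # ('2013-02', 12.0)
```

This matches the expected result in the question: for Emp_id 10264 the cost at the max date is 12.00.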
drop table test -
Add new variable in Selection ID for planning book
Hello,
Need your suggestion-
We have a requirement to add a new variable - like SNP Planner - in the SNP Selection ID of the planning book. Please suggest steps to configure this.
Regards
Pandit
Pandit,
There is more than one way of doing this.
The best option is to create a freely definable attribute (SPRO -> APO >> Master data >> Freely definable attributes) at material-location level, and then you will automatically be able to see the attribute when you create a selection ID. But you have to make sure you add logic in your material CIF user exit, or create a custom program in APO, to populate the desired values of the freely definable attribute for all materials in the selection.
Thanks
Saradha -
In Iowa, you can upload an app called Wilbur from your library so you can download books from the library. Is there another app that is needed to go with "Wilbur"? Because you cannot use the regular functions of an iPad, such as taking notes, highlighting a word for a definition, etc.
jodyfromclive wrote:
In Iowa, you can upload an app called Wilbur from your library so you can download books from the library.
Are you actually talking about the app called Overdrive Media Console? In any case, if it doesn't have the features you want, you need to ask the author to add them. -
Printout for Purchased books from itunes
Hi ,
I purchased a book from iTunes... I see it downloaded to my iPhone iBooks... I want to take a printout of the book, as it is tough to read the book on a small screen...
Please let me know if I can print out my purchased book.
Thanks,
Shravani
That is correct.
iBooks are for iPod/iPad/iPhone, NOT your computer.
Hi All,
I need to modify the SQL for retrieving assets from assets tab and assets in merchandising in 10.1.2. I found the class for SQLQuery builder, but I want to change the SQL.
Could anyone please suggest how to solve this?
Thanks & Regards,
Bala
How do I select a range of rows from an internal table in the debugger?
Hi,
I have a case where I wanted to delete a range of rows (several thousand) from an internal table using the debugger.
It seems that rows can only be selected one at a time by selecting (clicking) on the far left side of the row.
This is cumbersome, if not impossible when wishing to delete several thousand rows.
Other tools, such as Excel for example, allow for selecting a range of rows by selecting the first row and then holding the SHIFT key and selecting the last row and all rows in between will be selected.
I can't seem to find the combination of keys that will allow this in the table (or structure) tab of the debugger.
Is it possible to select a range of rows without having to select each row one at a time?
Thanks for your help,
Andy
While it's a Table Control and should/could have a button to select all fields (or visible fields)... I don't think we can do it right now... I know it's a pain to select each row one at a time... but I don't think we have any more options...
Greetings,
Blag. -
10g: delay for collecting results from parallel pipelined table functions
When parallel pipelined table functions are properly started and generate output records, there is a delay before the consuming main thread gathers these records.
This delay is huge compared with the run-time of the worker threads.
For my application it goes like this:
main thread timing efforts to start worker and collect their results:
[10:50:33-*10:50:49*]:JOMA: create (master): 015.93 sec (#66356 records, #4165/sec)
worker threads:
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.24 sec (#2449 EDRs, #467/sec, #0 errored / #6430 EBTMs, #1227/sec, #0 errored) - bulk #1 / sid #816
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.56 sec (#2543 EDRs, #457/sec, #0 errored / #6792 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #718
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.69 sec (#2610 EDRs, #459/sec, #0 errored / #6950 EBTMs, #1221/sec, #0 errored) - bulk #1 / sid #614
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.55 sec (#2548 EDRs, #459/sec, #0 errored / #6744 EBTMs, #1216/sec, #0 errored) - bulk #1 / sid #590
[10:50:34-*10:50:39*]:JOMA: create (slave) : 005.33 sec (#2461 EDRs, #462/sec, #0 errored / #6504 EBTMs, #1220/sec, #0 errored) - bulk #1 / sid #508
You can see, the worker threads are all started at the same time and terminating at the same time: 10:50:34-10:50:*39*.
But the main thread just invoking them and saving their results into a collection has finished at 10:50:*49*.
Why does it need #10 sec more just to save the data?
Here's a sample sqlplus script to demonstrate this:
--------------------------- snip -------------------------------------------------------
set serveroutput on;
drop table perf_data;
drop table test_table;
drop table tmp_test_table;
drop type ton_t;
drop type test_list;
drop type test_obj;
create table perf_data(
sid number,
t1 timestamp with time zone,
t2 timestamp with time zone,
client varchar2(256)
);
create table test_table(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create global temporary table tmp_test_table(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
create or replace type test_obj as object(
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
/
create or replace type test_list as table of test_obj;
/
create or replace type ton_t as table of number;
/
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
end;
create or replace package body test_pkg
as
/*
* Calculate timestamp with timezone difference
* in milliseconds
*/
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
mytab test_tab;
mytab2 test_list := test_list();
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
mytab2.extend;
mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
end loop;
for i in mytab2.first..mytab2.last loop
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate
pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i)
);
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
--------------------------- snap -------------------------------------------------------
best regards,
Frank
Hello
I think the delay you are seeing is down to choosing the partitioning method as HASH. When you specify anything other than ANY, an additional buffer sort is included in the execution plan...
create or replace package test_pkg
as
type test_rec is record (
a number(19,0),
b timestamp with time zone,
c varchar2(256)
);
type test_tab is table of test_rec;
type test_cur is ref cursor return test_rec;
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a));
function TF_Any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY);
end;
create or replace package body test_pkg
as
/*
* Calculate timestamp with timezone difference
* in milliseconds
*/
function TZDeltaToMilliseconds(
t1 in timestamp with time zone,
t2 in timestamp with time zone)
return pls_integer
is
begin
return (extract(hour from t2) - extract(hour from t1)) * 3600 * 1000
+ (extract(minute from t2) - extract(minute from t1)) * 60 * 1000
+ (extract(second from t2) - extract(second from t1)) * 1000;
end TZDeltaToMilliseconds;
function TF(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by hash(a))
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
function TF_any(mycur test_cur)
return test_list pipelined
parallel_enable(partition mycur by ANY)
is
pragma autonomous_transaction;
sid number;
counter number(19,0) := 0;
myrec test_rec;
t1 timestamp with time zone;
t2 timestamp with time zone;
begin
t1 := systimestamp;
select userenv('SID') into sid from dual;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
loop
fetch mycur into myRec;
exit when mycur%NOTFOUND;
-- attention: saves own SID in test_obj.a for indication to caller
-- how many sids have been involved
pipe row(test_obj(sid, myRec.b, myRec.c));
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate
pipe row(test_obj(sid, myRec.b, myRec.c)); -- duplicate once again
counter := counter + 1;
end loop;
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'slave');
commit;
dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
end;
end;
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 1037943675
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | 8168 | 3972K| | | Q1,01 | PCWP | |
| 4 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,01 | PCWP | |
| 5 | COLLECTION ITERATOR PICKLER FETCH| TF | | | | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | 931K| 140M| 136 (2)| 00:00:02 | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement
explain plan for
select /*+ first_rows */ test_obj(a, b, c)
from table(test_pkg.TF_Any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
select * from table(dbms_xplan.display);
Plan hash value: 4097140875
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 8168 | 3972K| 20 (0)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 3 | VIEW | | 8168 | 3972K| 20 (0)| 00:00:01 | Q1,00 | PCWP | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| TF_ANY | | | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL | TEST_TABLE | 931K| 140M| 136 (2)| 00:00:02 | Q1,00 | PCWP | |
Note
- dynamic sampling used for this statement
I posted about it here a few years ago, and I more recently posted a question on AskTom. Unfortunately Tom was not able to find a technical reason for it to be there, so I'm still a little in the dark as to why it is needed. The original question I posted is here:
Pipelined function partition by hash has extra sort#
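The observable effect of that extra BUFFER SORT can be imitated in any language: a streaming consumer handles rows as they arrive, while a buffered consumer sees nothing until the producer has finished. A rough Python analogy - purely illustrative of the buffering behaviour, not of Oracle internals:

```python
events = []

def producer(n):
    # Stand-in for the parallel slaves piping rows.
    for i in range(n):
        events.append(("produced", i))
        yield i

def consume_streaming(src):
    # Rows are handled as soon as they are produced (partition by ANY).
    for i in src:
        events.append(("consumed", i))

def consume_buffered(src):
    # All rows are materialized first, analogous to the extra BUFFER SORT
    # that appears with partition by HASH.
    buffered = list(src)
    for i in buffered:
        events.append(("consumed", i))

consume_streaming(producer(2))
streaming = events[:]
events.clear()
consume_buffered(producer(2))
buffered = events[:]

print(streaming)  # production and consumption interleave
print(buffered)   # all production first, consumption only afterwards
```

In the buffered case the first "consumed" event cannot happen until the last "produced" event, which is exactly the shape of the master-vs-slave timings in the traces above.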
I ran your tests with HASH vs ANY and the results are in line with the observations above....
declare
myList test_list := test_list();
myList2 test_list := test_list();
sids ton_t := ton_t();
sid number;
t1 timestamp with time zone;
t2 timestamp with time zone;
procedure LogPerfTable
is
type ton is table of number;
type tot is table of timestamp with time zone;
type clients_t is table of varchar2(256);
sids ton;
t1s tot;
t2s tot;
clients clients_t;
deltaTime integer;
btsPerSecond number(19,0);
edrsPerSecond number(19,0);
begin
select sid, t1, t2, client bulk collect into sids, t1s, t2s, clients from perf_data order by client;
if clients.count > 0 then
for i in clients.FIRST .. clients.LAST loop
deltaTime := test_pkg.TZDeltaToMilliseconds(t1s(i), t2s(i));
if deltaTime = 0 then deltaTime := 1; end if;
dbms_output.put_line(
'[' || to_char(t1s(i), 'hh:mi:ss') ||
'-' || to_char(t2s(i), 'hh:mi:ss') ||
']:' ||
' client ' || clients(i) || ' / sid #' || sids(i)
);
end loop;
end if;
end LogPerfTable;
begin
select userenv('SID') into sid from dual;
for i in 1..200000 loop
myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
end loop;
-- save into the real table
insert into test_table select * from table(cast (myList as test_list));
-- save into the tmp table
insert into tmp_test_table select * from table(cast (myList as test_list));
dbms_output.put_line(chr(10) || '(1) copy ''mylist'' to ''mylist2'' by streaming via table function...');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from table(cast (myList as test_list)) tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(2) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(3) copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(4) copy temporary ''tmp_test_table'' to ''mylist2'' by streaming via table function ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from tmp_test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
dbms_output.put_line(chr(10) || '(5) copy physical ''test_table'' to ''mylist2'' by streaming via table function using ANY:');
delete from perf_data;
t1 := systimestamp;
select /*+ first_rows */ test_obj(a, b, c) bulk collect into myList2
from table(test_pkg.TF_any(CURSOR(select /*+ parallel(tab,5) */ * from test_table tab)));
t2 := systimestamp;
insert into perf_data (sid, t1, t2, client) values(sid, t1, t2, 'master');
LogPerfTable;
dbms_output.put_line('... saved #' || myList2.count || ' records');
select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
end;
(1) copy 'mylist' to 'mylist2' by streaming via table function...
test_pkg.TF( sid => '918' ): enter
test_pkg.TF( sid => '918' ): exit, piped #200000 records
[01:40:19-01:40:29]: client master / sid #918
[01:40:19-01:40:29]: client slave / sid #918
... saved #600000 records
(2) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function:
[01:40:31-01:40:36]: client master / sid #918
[01:40:31-01:40:32]: client slave / sid #659
[01:40:31-01:40:32]: client slave / sid #880
[01:40:31-01:40:32]: client slave / sid #1045
[01:40:31-01:40:32]: client slave / sid #963
[01:40:31-01:40:32]: client slave / sid #712
... saved #600000 records
(3) copy physical 'test_table' to 'mylist2' by streaming via table function:
[01:40:37-01:41:05]: client master / sid #918
[01:40:37-01:40:42]: client slave / sid #738
[01:40:37-01:40:42]: client slave / sid #568
[01:40:37-01:40:42]: client slave / sid #618
[01:40:37-01:40:42]: client slave / sid #659
[01:40:37-01:40:42]: client slave / sid #963
... saved #3000000 records
(4) copy temporary 'tmp_test_table' to 'mylist2' by streaming via table function ANY:
[01:41:12-01:41:16]: client master / sid #918
[01:41:12-01:41:16]: client slave / sid #712
[01:41:12-01:41:16]: client slave / sid #1045
[01:41:12-01:41:16]: client slave / sid #681
[01:41:12-01:41:16]: client slave / sid #754
[01:41:12-01:41:16]: client slave / sid #880
... saved #600000 records
(5) copy physical 'test_table' to 'mylist2' by streaming via table function using ANY:
[01:41:18-01:41:38]: client master / sid #918
[01:41:18-01:41:38]: client slave / sid #681
[01:41:18-01:41:38]: client slave / sid #712
[01:41:18-01:41:38]: client slave / sid #754
[01:41:18-01:41:37]: client slave / sid #880
[01:41:18-01:41:38]: client slave / sid #1045
... saved #3000000 records
HTH
David -
Logic for inserting values from Pl/sql table
Hi I'm using Forms 6i and db 10.2.0.1.0
I am reading an xml file using text_io, and extracting the contents to Pl/sql table.
Suppose, the xml
<?xml version="1.0" encoding="UTF-8" ?>
<XML>
<File name="S2_240463.201002170044.Z">
<BookingEnvelope>
<SenderID>KNPROD</SenderID>
<ReceiverID>NVOCC</ReceiverID>
<Password>TradingPartners</Password>
</BookingEnvelope>
</File>
</XML>
From this XML, I'm extracting the contents to a table of records, say bk_arr, which looks like
Tag Val
File name S2_240463.201002170044.Z
SenderID KNPROD
ReceiverID NVOCC
Password TradingPartners
And now from this I've to insert into a table, say bk_det.
The tags may come in a different order, and sometimes additional tags may come in between,
so I cannot access them sequentially and insert like
Insert into bk_det(file,sndr,rcvr,pswd) values(bk_arr(1).val,bk_arr(2).val....)
The tag name is constant, i.e. for sender id, it will always be SenderID, not something like sndrid or sndid etc.
So if I've to insert into the senderid column, then I've to match the tag = SenderID and take the value at that index in the array.
How best can I do this?
Thanks
I am referring to how you are parsing the XML - as you can extract values from the XML by element name. And as the name is known, its associated value can be inserted easily.
Basic example:
SQL> with XML_DATA as(
2 select
3 xmltype(
4 '<?xml version="1.0" encoding="UTF-8" ?>
5 <XML>
6 <File name="S2_240463.201002170044.Z">
7 <BookingEnvelope>
8 <SenderID>KNPROD</SenderID>
9 <ReceiverID>NVOCC</ReceiverID>
10 <Password>TradingPartners</Password>
11 </BookingEnvelope>
12 </File>
13 </XML>' ) as XML_DOM
14 from dual
15 )
16 select
17 extractValue( xml_dom, '/XML/File/@name' ) as FILENAME,
18 extractValue( xml_dom, '/XML/File/BookingEnvelope/SenderID' ) as SENDER_ID,
19 extractValue( xml_dom, '/XML/File/BookingEnvelope/ReceiverID' ) as RECEIVER_ID,
20 extractValue( xml_dom, '/XML/File/BookingEnvelope/Password' ) as PASSWORD
21 from xml_data
22 /
FILENAME SENDER_ID RECEIVER_I PASSWORD
S2_240463.201002170044.Z KNPROD NVOCC TradingPartners
SQL>
Now this approach can be used as follows:
create or replace procedure AddFile( xml varchar2 ) is
begin
insert into foo_files(
filename,
sender_id,
receiver_id,
password
)
with XML_DATA as(
select
xmltype( xml ) as XML_DOM
from dual
)
select
extractValue( xml_dom, '/XML/File/@name' ),
extractValue( xml_dom, '/XML/File/BookingEnvelope/SenderID' ),
extractValue( xml_dom, '/XML/File/BookingEnvelope/ReceiverID' ),
extractValue( xml_dom, '/XML/File/BookingEnvelope/Password' )
from xml_data;
end;
/
No need for a fantasy called PL/SQL "tables".
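The same fetch-by-element-name idea works in any XML library, which is why tag order and extra tags in between don't matter. A short sketch using Python's standard-library ElementTree against the exact document from the thread:

```python
import xml.etree.ElementTree as ET

# The same document posted in the thread; element order inside
# BookingEnvelope is irrelevant when values are fetched by name.
doc = """<?xml version="1.0" encoding="UTF-8" ?>
<XML>
<File name="S2_240463.201002170044.Z">
<BookingEnvelope>
<SenderID>KNPROD</SenderID>
<ReceiverID>NVOCC</ReceiverID>
<Password>TradingPartners</Password>
</BookingEnvelope>
</File>
</XML>"""

root = ET.fromstring(doc.encode("utf-8"))
# Each value is addressed by path, never by position in the file.
record = {
    "filename": root.find("File").get("name"),
    "sender_id": root.findtext("File/BookingEnvelope/SenderID"),
    "receiver_id": root.findtext("File/BookingEnvelope/ReceiverID"),
    "password": root.findtext("File/BookingEnvelope/Password"),
}
print(record["sender_id"])  # KNPROD
```

Like the extractValue calls above, each column value is pulled by its known name/path, so the insert never depends on the order in which tags arrive.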