INSERT with MAX
Inside a stored procedure, I want to batch-insert records with SEQ_NO set to MAX(SEQ_NO)+1:
LIST_ID  ITEM  SEQ_NO  ID
1        xx    12      xx   (current)
1        xx    13      xx
1        xx    14      xx   (expected new)
INSERT INTO FUN_LIST_ITEM
SELECT IN_LIST_ID, to_number(token),
NVL(MAX(SEQ_NO),0)+1, SEQ_FUN_LINK.NEXTVAL
FROM FUN_LIST_ITEM
WHERE LIST_ID=IN_LIST_ID;
java.sql.SQLException: ORA-20008:
ORA-02287: sequence number not allowed here
Hi,
Try this. ORA-02287 is raised because a sequence's NEXTVAL cannot appear in the same query block as an aggregate such as MAX; moving the MAX into a scalar subquery avoids that:
INSERT INTO fun_list_item
SELECT IN_LIST_ID
, to_number(token)
, (SELECT NVL(MAX(SEQ_NO),0)+1 FROM FUN_LIST_ITEM WHERE LIST_ID = IN_LIST_ID)
, SEQ_FUN_LINK.NEXTVAL
FROM FUN_LIST_ITEM
WHERE LIST_ID=IN_LIST_ID;
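The scalar-subquery MAX+1 pattern can be sanity-checked outside Oracle. Here is a minimal sketch using SQLite via Python's sqlite3; the table shape mirrors the thread, the 'yy' item value is made up for illustration, and AUTOINCREMENT stands in for the Oracle sequence that fills ID:

```python
import sqlite3

# Sketch of the scalar-subquery MAX+1 pattern, transplanted to SQLite so it
# can be run anywhere. The 'yy' value is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fun_list_item (
        list_id INTEGER,
        item    TEXT,
        seq_no  INTEGER,
        id      INTEGER PRIMARY KEY AUTOINCREMENT
    )
""")
conn.executemany(
    "INSERT INTO fun_list_item (list_id, item, seq_no) VALUES (?, ?, ?)",
    [(1, "xx", 12), (1, "xx", 13)],
)

# MAX+1 computed in a scalar subquery, scoped to the same list_id
conn.execute("""
    INSERT INTO fun_list_item (list_id, item, seq_no)
    SELECT 1, 'yy',
           (SELECT COALESCE(MAX(seq_no), 0) + 1
              FROM fun_list_item
             WHERE list_id = 1)
""")
row = conn.execute(
    "SELECT MAX(seq_no) FROM fun_list_item WHERE list_id = 1"
).fetchone()
print(row[0])  # 14, matching the "expected new" row in the question
```

Note that under concurrency two sessions can still compute the same MAX+1; a sequence remains the safer source of unique values.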
Similar Messages
-
Multiple Inserts with max+1
Hi,
I am trying to insert 20k rows into a table that already has 80k rows. The primary key column should be MAX+1 for each insert. There is no sequence created for it. Any idea how to do it? The statement below does not work.
INSERT INTO TBL1
(SELECT (SELECT MAX(COL_key)+1 FROM TBL1),B,C FROM TBL2);
All 20k inserted records end up with the same col_key value, because the MAX is evaluated once for the statement and col_key is not recomputed on the fly to generate a new MAX+1 per row.
Thanks.
Is there any other option than using a sequence?
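For reference, one set-based fix for the duplicated-key problem described above is to read the current MAX once and give each staged row its own offset; in Oracle the same idea can be written in one statement as MAX + ROWNUM. A sketch with SQLite via Python, with made-up table contents:

```python
import sqlite3

# Sketch (SQLite, made-up data): read MAX(col_key) once, then offset each
# staged row by its own running number. In Oracle the same idea is roughly:
#   INSERT INTO tbl1
#   SELECT m.mx + ROWNUM, b, c
#   FROM tbl2, (SELECT MAX(col_key) mx FROM tbl1) m;
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (col_key INTEGER PRIMARY KEY, b TEXT, c TEXT)")
conn.execute("CREATE TABLE tbl2 (b TEXT, c TEXT)")
conn.executemany("INSERT INTO tbl1 VALUES (?, ?, ?)",
                 [(k, "x", "y") for k in range(1, 81)])    # 80 existing rows
conn.executemany("INSERT INTO tbl2 VALUES (?, ?)",
                 [("new", str(n)) for n in range(20)])     # 20 rows to copy

base = conn.execute("SELECT COALESCE(MAX(col_key), 0) FROM tbl1").fetchone()[0]
rows = conn.execute("SELECT b, c FROM tbl2").fetchall()
conn.executemany(
    "INSERT INTO tbl1 (col_key, b, c) VALUES (?, ?, ?)",
    [(base + i + 1, b, c) for i, (b, c) in enumerate(rows)],
)

distinct = conn.execute("SELECT COUNT(DISTINCT col_key) FROM tbl1").fetchone()[0]
print(distinct)  # 100 -- every copied row got its own key
```

Like any MAX-based scheme, this is only safe if no other session inserts concurrently.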
Sure! You can make it really slow and non-scalable by keeping the MAX value in a table and serializing every access to the table by locking the row with the MAX value, incrementing the value and then unlocking the row with a COMMIT.
This has the added advantage of using PL/SQL to make the performance even worse.
If you then combine these methods by calling the PL/SQL function in your SQL query to get the next value you can double the number of context switches that you do and slow things down a little more.
SELECT myPackage.myNextVal() FROM dual;
That will give you the poor performance you are seeking, works in a multi-user environment and avoids those pesky sequences. -
What is new with MAX 2.0 and is it compatible with Session Manager?
We added non-IVI instrument information, in basically the same structure as for IVI instruments, into the ivi.ini file to keep all instrument information in the same place. Using MAX Version 1.1 caused no problems whatsoever and the system worked fine. With the advent of MAX 2.0 you seem to use ivi.ini as well as config.mxs to store instrument information. What we have found now is that, given a working ivi.ini file from MAX 1.1, we end up with 2 or 3 copies of all the devices in the IVI Instruments->Devices section! When the duplicate entries are deleted and the application exited, the ivi.ini file is updated minus the [Hardware->] sections which contain the resource descriptors that our applications look for.
As an added complication, under MAX 2.1 (from an evaluation of the Switch Executive) it behaves the same, except that it almost always crashes with one of the following errors: 'OLEChannelWnd Fatal Error' or 'Error #26 window.cpp line 10028 LabVIEW Version 6.0.2'. Once opened and closed, MAX 2.1 will not open again! (Note we do not have LabVIEW on the system.)
What is the relationship between config.mxs and ivi.ini now? Also, your Session Manager application (for use with TestStand) extracts information from ivi.ini and may expect entries to be manually entered into ivi.ini (e.g. NISessionManager_Reset = True), i.e. is the TestStand Session Manager compatible with MAX 2.0?
Brian,
The primary difference between MAX 1.1 and 2.x is that there is a new internal architecture. MAX 2.x synchronizes data between the config.mxs and the ivi.ini. The reason you're having trouble is that user-editing of the ivi.ini file is not supported with MAX 2.x.
Some better solutions to accomplish what you want:
1. Do as Mark Ireton suggested in his answer
2. Use the IVI Run-Time Configuration functions. They will allow you to dynamically configure your Logical Names, Virtual Instruments, Instrument Drivers, and Devices. You can then use your own format for storing and retrieving that information, and use the relevant pieces for each execution. You can find information on these functions in the IVI CVI Help file located in Start >> National Instruments >> IVI Driver Toolset folder. Go to the chapter on Run-time Initialization Configuration.
I strongly suggest #2, because those functions will continue to be supported in the future, while other mechanisms may not be.
--Bankim
Bankim Tejani
National Instruments -
I have not been able to use iTunes for several months. Every time I open iTunes, it freezes my computer such that there is about a minute between each action. I am running iTunes 11 on Mac OS 10.6.8 and have a computer with maxed-out memory. Help! I can't access my iTunes content.
-
Multi-table INSERT with PARALLEL hint on 2 node RAC
Multi-table INSERT statement with parallelism set to 5, works fine and spawns multiple parallel
servers to execute. Its just that it sticks on to only one instance of a 2 node RAC. The code I
used is what is given below.
create table t1 ( x int );
create table t2 ( x int );
insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
when (dummy='X') then into t1(x) values (y)
when (dummy='Y') then into t2(x) values (y)
select dummy, 1 y from dual;
I can see multiple sessions using the query below, but only on one instance. This happens not only for the statement above, but also for statements where real tables (tables with more than 20 million records) are used.
select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
sql.sql_text
from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
WHERE p.sid = s.sid
and p.serial# = s.serial#
and p.sid = ps.sid
and p.serial# = ps.serial#
and s.sql_address = sql.address
and s.sql_hash_value = sql.hash_value
and qcsid=945
Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
Thanks,
Mahesh
Please take a look at these 2 articles below:
http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
thanks
http://swervedba.wordpress.com -
Is anyone having problems with a Matrox Mini with Max on OS X 10.8.1? Mine just does not work.
I didn't sin, but you did by assuming things that are not true.
1/ The project file is on the same drive as the export, BUT, the source files are on a different capture drive than the export to C.
2/ I've been using the Matrox RTX100 with previous versions of Premiere for 7 years, and the two have worked very well together all that time, and still are working very well, and, will continue to I've no doubt. - Just because you couldn't get it to work, does not mean they don't.
3/ The export I refer to did not have any connection to the Matrox box. All Premiere and Encore.
4/ I didn't mention that I had done a lot of successful export tests of various definition formats, to other formats, and including a 1 hour Full HD export, no problems.
So, - there - nyah ! -
My CLOB insert with PreparedStatements WORKS but is SLOOOOWWW
Hi All,
I am working on an application which copies over a MySQL database
to an Oracle database.
I got the code to work including connection pooling, threads and
PreparedStatements. For tables with CLOBs in them, I go through the
extra process of inserting the CLOBs according to Oracle norm, i.e.
getting locator and then writing to that:
http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/advanced/LOBSample/LOBSample.java.html (Good Example for BLOBs, CLOBs)
However, for tables with such CLOBs, I only get an insert rate of about 1 record per second! Tables without CLOBs (and thus without the roundabout way of inserting CLOBs) run at approx. 10 per second.
How can I improve the speed of my CLOB inserts / improve the code? At the moment, a table of 30,000 records (with CLOBs) takes about 30,000 seconds, which is over 8 hours!
Here is my working code, which is run when my application notices that the table has CLOBs. The record has already been inserted with all non-CLOB fields and EMPTY_CLOB() as a placeholder for the CLOB. The code then selects that row (the one just inserted), gets a handle on the empty CLOB locator, writes my CLOB content (over 4000 characters) to that handle, and then closes the handle. At the very end, I do conn.commit().
Any tips for improving speed?
conn.setAutoCommit(false);
* This first section is Pseudo-Code. The actual code is pretty straight
* forward. (1) I create the preparedStatement, (2) I go record by record
* - for each record, I (a) loop through each column and run the corresponding
* setXXX to set the preparedStatement parameters, (b) run
* preparedStatement.executeUpdate(), and (c) if CLOB is present, run below
* actual code.
* During insertion of the record (executeUpdate), if I notice that
* a Clob needs to be inserted, I insert a "EMPTY_CLOB()" placeholder and set
* the flag "clobInTheHouse" to true. Once the record is inserted, if "clobInTheHouse"
* is actually "true," I go to the below code to insert the CLOB into that
* newly created record's "EMPTY_CLOB()" placeholder.
// clobSelect = "SELECT * FROM tableName WHERE "uniqueRecord" LIKE '1'
// I create the above for each record I insert and have this special uniqueRecord value to
// identify what record that is so I can get it below. clobInTheHouse is true when, where I
// insert the records, I find that there is a CLOB that needs to be inserted.
if(clobInTheHouse){
    ResultSet lobDetails = stmt.executeQuery(clobSelect);
    ResultSetMetaData rsmd = lobDetails.getMetaData();
    if(lobDetails.next()){
        for(int i = 1; i <= rsmd.getColumnCount(); i++){
            // if the column name matches a CLOB column name, write its content
            if(clobs.contains(rsmd.getColumnName(i))){
                Clob theClob = lobDetails.getClob(i);
                Writer clobWriter = ((oracle.sql.CLOB)theClob).getCharacterOutputStream();
                StringReader clobReader = new StringReader((String) clobHash.get(rsmd.getColumnName(i)));
                char[] cbuffer = new char[30 * 1024]; // buffer for chunks of CLOB data
                int nread = 0;
                try{
                    while((nread = clobReader.read(cbuffer)) != -1){
                        clobWriter.write(cbuffer, 0, nread);
                    }
                }catch(IOException ioe){
                    System.out.println("E: clobWriter exception - " + ioe.toString());
                }finally{
                    try{
                        clobReader.close();
                        clobWriter.close();
                    }catch(IOException ioe2){
                        System.out.println("E: clobWriter close exception - " + ioe2.toString());
                    }
                }
            }
        }
    }
    try{
        stmt.close();
    }catch(SQLException sqle2){
        System.out.println("E: stmt close exception - " + sqle2.toString());
    }
}
conn.commit();
Can you use insert .. returning .. so you do not have to select the empty_clob back out?
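One reason the placeholder/select-back/stream pattern is slow is that it costs several round trips per row. A hedged sketch with SQLite via Python (not Oracle; with Oracle JDBC the rough analogue would be binding the string directly with setString/setCharacterStream on the PreparedStatement, or INSERT ... RETURNING to get the locator without the extra SELECT):

```python
import sqlite3

# Sketch (SQLite, not Oracle): binding the large text directly in the INSERT
# turns insert + select-back + stream-write per row into one batched
# statement. Table and sizes are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")

big_text = "x" * 50_000   # well past Oracle's 4000-char literal limit

# one batched statement instead of three round trips per row
conn.executemany(
    "INSERT INTO docs (id, body) VALUES (?, ?)",
    [(n, big_text) for n in range(100)],
)

n_rows, max_len = conn.execute(
    "SELECT COUNT(*), MAX(LENGTH(body)) FROM docs"
).fetchone()
print(n_rows, max_len)  # 100 50000
```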
[I have a similar problem but I do not know the primary key to select on, I am really looking for an atomic insert and fill clob mechanism, somone said you can create a clob fill it and use that in the insert, but I have not seen an example yet.] -
Hi,
Can we write INSERT with a WHERE clause? I'm looking for something similar to the statement below (which is giving me an error):
insert into PS_AUDIT_OUT (AUDIT_ID, EMPLID, RECNAME, FIELDNAME, MATCHVAL, ERRORMSG)
Values ('1','10000139','NAMES','FIRST_NAME',';','')
Where AUDIT_ID IN
( select AUDIT_ID from PS_AUDIT_FLD where AUDIT_ID ='1' and RECNAME ='NAMES'
AND FIELDNAME = 'FIRST_NAME' AND MATCHVAL = ';' );
Thanks
Durai
It is not clear what you are trying to do, but it looks like:
insert
into PS_AUDIT_OUT(
       AUDIT_ID,
       EMPLID,
       RECNAME,
       FIELDNAME,
       MATCHVAL,
       ERRORMSG
     )
select '1',
       '10000139',
       'NAMES',
       'FIRST_NAME',
       ';',
       ''
  from PS_AUDIT_FLD
 where AUDIT_ID = '1'
   and RECNAME = 'NAMES'
   and FIELDNAME = 'FIRST_NAME'
   and MATCHVAL = ';';
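The INSERT ... SELECT idiom suggested in the reply is portable: INSERT ... VALUES takes no WHERE clause, but INSERT ... SELECT does, so the literals are inserted only when a matching row exists. A small sketch with SQLite via Python, with made-up data:

```python
import sqlite3

# Sketch (SQLite, made-up data) of conditional insert via INSERT ... SELECT:
# the literal row lands only when the audit row matches the WHERE clause.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ps_audit_fld "
             "(audit_id TEXT, recname TEXT, fieldname TEXT, matchval TEXT)")
conn.execute("CREATE TABLE ps_audit_out (audit_id TEXT, emplid TEXT, "
             "recname TEXT, fieldname TEXT, matchval TEXT, errormsg TEXT)")
conn.execute("INSERT INTO ps_audit_fld VALUES ('1', 'NAMES', 'FIRST_NAME', ';')")

conn.execute("""
    INSERT INTO ps_audit_out
    SELECT '1', '10000139', recname, fieldname, matchval, ''
      FROM ps_audit_fld
     WHERE audit_id = '1'
       AND recname = 'NAMES'
       AND fieldname = 'FIRST_NAME'
       AND matchval = ';'
""")
count = conn.execute("SELECT COUNT(*) FROM ps_audit_out").fetchone()[0]
print(count)  # 1 -- inserted only because the audit row matched
```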
SY. -
Can somebody explain to me how to use the SQL statement INSERT with Dreamweaver 8?
Before I can SELECT, I should be able to INSERT details from a .asp page into (in my case) database.mdb.
Please can somebody explain how Dreamweaver 8 handles this issue and how to do it.
Thanks in advance
http://www.aspwebpro.com/aspscripts/records/insertnew.asp
Something like this?
Dan Mode
--> Adobe Community Expert
*Flash Helps*
http://www.smithmediafusion.com/blog/?cat=11
*THE online Radio*
http://www.tornadostream.com
*Must Read*
http://www.smithmediafusion.com/blog
"thebold" <[email protected]> wrote in
message
news:eptack$nmb$[email protected]..
> Can somebody explain to me how to use the SQLstatement
INSERT with
> dremweaver 8.
> Before i should SELECT i should be able to INSERT
details from a .asp into
> 'i
> my case' database.mdb.
> please can somebody explain to me how Dreamweaver 8 goes
over this issue
> and
> how to do it
>
> Thanks in advance
> -
Performance of insert with spatial index
I'm writing a test that inserts (using OCI) 10,000 2D point geometries (gtype=2001) into a table with a single SDO_GEOMETRY column. I wrote the code doing the insert before setting up the index on the spatial column, thus I was aware of the insert speed (almost instantaneous) without a spatial index (with layer_gtype=POINT), and noticed immediately the performance drop with the index (> 10 seconds).
Here's the raw timing data of 3 runs in each of 3 configurations (the clock ticks every 14 to 16 ms, hence the zeros when a step completes before the next tick):

                                    truncate   execute   commit
no spatial index                    0.016      0.171     0.016
no spatial index                    0.031      0.172     0.000
no spatial index                    0.031      0.204     0.000
index (1000 default batch size)     0.141      10.937    1.547
index (1000 default batch size)     0.094      11.125    1.531
index (1000 default batch size)     0.094      10.937    1.610
index SDO_DML_BATCH_SIZE=10000      0.203      11.234    0.359
index SDO_DML_BATCH_SIZE=10000      0.094      10.828    0.344
index SDO_DML_BATCH_SIZE=10000      0.078      10.844    0.359

As you can see, I played with SDO_DML_BATCH_SIZE to change the default of 1,000 to 10,000, which does improve the commit speed a bit, from 1.5 s to 0.35 s (pretty good when you only look at these numbers...), but the shocking part is the almost 11 s the inserts are now taking, compared to 0.2 s without an index: that's a 50x drop in performance!
I've looked at my table in SQL Developer, and it has no triggers associated, although there has to be something to mark the index as dirty so that it updates itself on commit.
So where is the huge overhead during the insert coming from?
(by insert I mean the time OCIStmtExecute takes to run the array-bind of 10,000 points. It's exactly the same code with or without an index).
Can anyone explain the 50x insert performance drop?
Any suggestion on how to improve the performance of this scenario?
To provide another data point, creating the index itself on a populated table (with the same 10,000 points) takes less than 1 second, which is consistent with the commit speeds I'm seeing, and thus puzzles me all the more regarding this 10s insert overhead...
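As a generic aside (an ordinary SQLite B-tree index, nothing Spatial about it, row shapes made up), the point that index maintenance is paid per row at INSERT time can be demonstrated like this; absolute timings vary by machine, so only the row counts are checked:

```python
import sqlite3, time

# Generic illustration: the same bulk load into an indexed table pays the
# per-row index maintenance cost at INSERT time. Timings vary by machine.
pts = [(i, float(i % 100), float(i // 100)) for i in range(10_000)]

def load(with_index):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE pts (id INTEGER PRIMARY KEY, x REAL, y REAL)")
    if with_index:
        conn.execute("CREATE INDEX pts_xy ON pts (x, y)")
    t0 = time.perf_counter()
    conn.executemany("INSERT INTO pts VALUES (?, ?, ?)", pts)
    conn.commit()
    elapsed = time.perf_counter() - t0
    n = conn.execute("SELECT COUNT(*) FROM pts").fetchone()[0]
    return n, elapsed

n_plain, t_plain = load(False)
n_index, t_index = load(True)
print(n_plain, n_index)  # 10000 10000 -- same rows land either way
```

This does not model Oracle Spatial's deferred R-tree batching, only the general insert-time cost of keeping an index current.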
SQL> set timing on
SQL> select count(*) from within_point_distance_tab;
COUNT(*)
10000
Elapsed: 00:00:00.01
SQL> CREATE INDEX with6CDF1526$point$idx
2 ON within_point_distance_tab(point)
3 INDEXTYPE IS MDSYS.SPATIAL_INDEX
4 PARAMETERS ('layer_gtype=POINT');
Index created.
Elapsed: 00:00:00.96
SQL> drop index WITH6CDF1526$POINT$IDX force;
Index dropped.
Elapsed: 00:00:00.57
SQL> CREATE INDEX with6CDF1526$point$idx
2 ON within_point_distance_tab(point)
3 INDEXTYPE IS MDSYS.SPATIAL_INDEX
4 PARAMETERS ('layer_gtype=POINT SDO_DML_BATCH_SIZE=10000');
Index created.
Elapsed: 00:00:00.98
SQL>
Thanks for your input. We are likely to use partitioning down the line, but what you are describing (partition exchange) is currently beyond my abilities in plain SQL, and how this could be accomplished from an OCI client application without affecting other users while keeping the transaction boundaries sounds far from trivial (i.e. can it be made transparent to the client application, and does it require privileges the client does not have?). I'll have to investigate this further, though; this technique sounds like one accessible to a DBA only, not from a plain client app with non-privileged credentials.
The thing that I fail to understand, though, despite your explanation, is why the slowdown is not entirely on the commit. After all, documentation for the SDO_DML_BATCH_SIZE parameter of the Spatial index implies that the index is updated on commit only, where new rows are fed 1,000 or 10,000 at a time to the indexing engine, and I do see time being spent during commit, but it's the geometry insert that slows down the most, and that to me looks quite strange.
It's so much slower that it's as if each geometry was indexed one at a time, when I'm doing a single insert with an array bind (i.e. equivalent to a bulk operation in PL/SQL), and if so much time is spend during the insert, then why is any time spent during the commit. In my opinion it's one or the other, but not both. What am I missing? --DD -
How to insert with select in table with object types
I am in the proces of redesigning some tables, as i have upgraded from
personal oracle 7 to personal oracle 8i.
I have constructed an object type Address_type, which is one of the columns
in a table named DestTable.
The object type is created as follows:
CREATE OR REPLACE TYPE pub.address_type
AS OBJECT (
Street1 varchar2(50),
Street2 varchar2(50),
ZipCode varchar2(10));
The table is created as follows:
CREATE TABLE pub.DestTable
(id INTEGER PRIMARY KEY,
LastName varchar2(30),
FirstName varchar2(25),
Address pub.address_type);
Inserting a single row is OK when I use the following syntax:
Insert into DestTable values (1, '******* ', 'Lawrence', pub.address_type(
'500 Oracle Parkway', 'Box 59510', '95045'));
When i try to insert values into the table by selecting from another table i
cannot do it and cannot figure out what is wrong
I have used the following syntax:
Insert into DestTable
id, name, pub.address_type(Street1, Street2, ZipCode))
select
id, lastname, firstname, street1, street2, ZipCode
from SourceTable;
I have also tried the following syntax:
Insert into DestTable
id, name, pub.address_type(Address.Street1, Address.Street2,Address.ZipCode))
select
id, lastname, firstname, street1, street2, ZipCode
from SourceTable;
What is wrong here?
Magnus,
1. Check out the examples on 'insert with subquery' in http://otn.oracle.com/docs/products/oracle8i/doc_library/817_doc/server.817/a85397/state21b.htm#2065648
2. Correct your syntax: name the table's columns in the insert list, and put the object constructor in the SELECT list rather than the column list:
Insert into DestTable
(id, LastName, FirstName, Address)
select
id, lastname, firstname, pub.address_type(street1, street2, ZipCode)
from SourceTable;
Regards,
Geoff
-
Installation of CE 7.1 on windows with Max DB
Hi All,
I have installed CE 7.1 on Windows with MaxDB.
After successful installation, it is not getting started. Checking the log files, I found that the DB connection could not be established.
I cannot find the MaxDB icon in the system tray, and I am not sure whether MaxDB is running OK.
How can we check that, and how can I make it work?
Regards
Suneel
Hi.
I have tried the same, and on Windows, for some strange reason, MaxDB does not start automatically. You might try going to Control Panel -> Services, selecting the MaxDB service and setting it to autostart.
Then you need to alter the profile files and enter a line like "autostart = 1". I do not have the exact location right now, but can find it tomorrow.
BR
Poul. -
Hi all,
I want to get the max CHANGENR from CDHDR for each object id. If I use FOR ALL ENTRIES I get an error. How can I get the list with the max change number for each material?
SELECT * FROM CDPOS INTO TABLE T_CDPOS FOR ALL
ENTRIES IN T_ZWPBPH WHERE OBJECTCLAS ='MATERIAL'
AND OBJECTID = T_ZWPBPH-PBPINO AND FNAME = 'SPART'.
SELECT * FROM CDHDR INTO TABLE T_CDHDR FOR ALL
ENTRIES IN T_CDPOS WHERE OBJECTCLAS ='MATERIAL' AND
OBJECTID = T_CDPOS-OBJECTID AND CHANGENR = T_CDPOS-CHANGENR => this statement
Message was edited by:
priya katragadda
Can this help you out?
SELECT cdhdr~objectid cdhdr~objectclas
       <b>MAX( cdhdr~changenr )</b>
  INTO (v_id, v_objclass, v_number)
  FROM cdhdr INNER JOIN cdpos AS a
    ON cdhdr~objectclas = a~objectclas   "common keys
   AND cdhdr~objectid   = a~objectid
   AND cdhdr~changenr   = a~changenr
 WHERE cdhdr~objectclas = 'MATERIAL'
   AND a~objectid = t_zwpbph-pbpino
   AND a~fname = 'SPART'
 GROUP BY cdhdr~objectid cdhdr~objectclas
 ORDER BY cdhdr~objectid.
  WRITE: / v_id, v_objclass, v_number.
ENDSELECT.
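Stripped of the FOR ALL ENTRIES machinery, the underlying requirement (latest change number per object) is a plain GROUP BY with MAX. A sketch with SQLite via Python, with made-up object ids and change numbers:

```python
import sqlite3

# Sketch (SQLite, made-up data): max change number per object id is a
# straightforward GROUP BY / MAX, mirroring the CDHDR query above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdhdr (objectclas TEXT, objectid TEXT, changenr INTEGER)")
conn.executemany(
    "INSERT INTO cdhdr VALUES ('MATERIAL', ?, ?)",
    [("M1", 10), ("M1", 12), ("M2", 7), ("M2", 5)],
)
rows = conn.execute("""
    SELECT objectid, MAX(changenr)
      FROM cdhdr
     WHERE objectclas = 'MATERIAL'
     GROUP BY objectid
     ORDER BY objectid
""").fetchall()
print(rows)  # [('M1', 12), ('M2', 7)]
```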
regards,
vijay -
Multi table insert with error logging
Hello,
Can anyone please post an example of a multitable insert with an error logging clause?
Thank you,
Please assume that I check the documentation before asking a question in the forums.
Well, apparently you had not.
From the docs in question:
multi_table_insert:
{ ALL insert_into_clause
      [ values_clause ] [ error_logging_clause ]
      [ insert_into_clause
        [ values_clause ] [ error_logging_clause ]
      ]...
| conditional_insert_clause
} subquery
Regards
Peter -
My PDF file is too big (7 MB); I need a PDF file with a max size of 4 MB. How do I minimize the size of a PDF file?
The other alternative is Save As Other > Reduce File Size. The PDF Optimizer gives you more control. You might also use the audit feature in the PDF Optimizer to see where the size problem is coming from. You might find that in the Word file you can select an image and then go to the FORMAT (Pictures) menu and select Compress for all of the bitmaps in the file. That will typically use 150 dpi by default, which is adequate for most needs.