Need an efficient way to write history record
I need to keep the old images of the records in table A after any changes made by the user, so I created a history table B that is exactly the same as A but has two more
columns to store SYSDATE and USER.
Currently, my program uses a cursor to loop through the records in A, insert each record into B along with SYSDATE and USER, and then delete the record from A.
Is there a better method to deal with this?
Hi,
You can write an UPDATE trigger on A that writes the old record to B.
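A sketch of such a trigger (the column names COL1/COL2 are placeholders for A's real columns, and the two audit columns are assumed to be called CHG_DATE and CHG_USER — adjust to your actual table definitions):

```sql
-- Copies the old image of each changed row of A into history table B,
-- stamped with SYSDATE and USER. No cursor loop needed, and the row
-- in A is left in place.
CREATE OR REPLACE TRIGGER a_history_trg
AFTER UPDATE OR DELETE ON a
FOR EACH ROW
BEGIN
  INSERT INTO b (col1, col2, chg_date, chg_user)
  VALUES (:OLD.col1, :OLD.col2, SYSDATE, USER);
END;
/
```

Because the trigger fires per row inside the same transaction, the history row commits or rolls back together with the user's change.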
Similar Messages
-
Need an efficient way of address comparisons
I am trying to find a way to compare two addresses,
e.g. "6841 Day Drive" with "6841 Day Dr.". In this example, "Drive" and "Dr" need to be matched somehow.
I read some forums that suggest splitting the text, then trying to match the numerical portion and then the street name.
Now imagine there are thousands of records to be processed. The above process is practically useless when processing so many records.
Can anyone help me with some ideas on how to implement this efficiently?
Thanks in advance,
Mandeep

JoachimSauer wrote:
mandy_m wrote:
I call it practically useless to split the string and do text comparisons, considering the fact that there are many ways in which a user might write the address. I gave an example of Drive - Dr; others may be Road - Rd, Street - St, Apartment - Apt... and the list is very long.
That's why I said you need to canonicalize the data. Write a method (or class) that canonicalizes any given address. Then apply that to all data that you store (i.e. all your existing data). This might take some time, but only needs to be done once.
Joachim's definitely giving you good advice. If you have no control over the data coming in, this is definitely not a simple job; if you do, you could require people to put the 'type' of street and the building number separately from the street name (and don't forget that building numbers can be ranges if you're dealing with corporate addresses).
In England, a lot of people give names to their houses, which further muddies the waters as it is often the first line in the address (the same could be true of a corporate address: Company House, 200-250 Suchandsuch Rd....etc).
A few things to try (some have already been suggested):
1. Create an Address class based on the components that you know you want. At the very least, it should have 'equals()', 'hashCode()' and 'toString()' implemented.
2. Convert the input address to lowercase (or uppercase).
3. Create a dictionary of known street types (road, avenue, boulevard...etc). This could be a static HashSet in your Address class.
4. Create a list of known abbreviations (apt, rd, dr...). I'd suggest you include versions with and without punctuation.
5. Expand all abbreviations in your input address.
6. Remove all remaining punctuation, but keep the original line orientation (this might involve splitting your address string into 'lines' based on commas for example).
7. Look for a line that contains one of your known 'street types'. This is highly likely to be the actual street address.
8. Extract the number (or numbers, in the case of a range). Watch out for things like '221b baker street'.
9. Load the resulting components into your Address variables.
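Steps 2, 4, 5 and 6 above can be sketched like this in Java (the abbreviation map is a tiny illustrative sample, not a real dictionary, and the class name is made up for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class AddressCanonicalizer {
    // Illustrative abbreviation map; a real one would be much larger
    // and ideally loaded from the postal service's official list.
    private static final Map<String, String> ABBREVIATIONS = new HashMap<>();
    static {
        ABBREVIATIONS.put("dr", "drive");
        ABBREVIATIONS.put("rd", "road");
        ABBREVIATIONS.put("st", "street");
        ABBREVIATIONS.put("apt", "apartment");
    }

    // Lowercase the input, strip punctuation from each token,
    // and expand known abbreviations.
    public static String canonicalize(String address) {
        StringBuilder sb = new StringBuilder();
        for (String token : address.toLowerCase().split("\\s+")) {
            // drop punctuation such as the trailing dot in "Dr."
            String bare = token.replaceAll("[^a-z0-9]", "");
            sb.append(ABBREVIATIONS.getOrDefault(bare, bare)).append(' ');
        }
        return sb.toString().trim();
    }
}
```

With this, "6841 Day Dr." and "6841 Day Drive" canonicalize to the same string, so thousands of records can be compared by ordinary string equality (or hashed into a HashSet) instead of pairwise fuzzy matching.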
The fact of the matter is that even with all of the above, you will probably not have a 100% solution. Personally, I'd reject all input addresses that are so badly written that you can't determine required components for your class, but I'm an ornery cuss...
Apart from that, you could check out the postal service website where you live to see if they have any guidelines for parsing addresses. The GPO here used to have a great page for that.
Good luck. What you want is not simple.
Winston -
Most efficient way to write 4 bytes at the start of any file.
Quick question: I want to write 4 bytes at the start of a file without overwriting the current bytes in the file, i.e. push the existing bytes along... Is my only option writing the 4 bytes into a new file and then writing the rest of the file after them? RandomAccessFile is so close, but it overwrites :(.
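A minimal sketch of the new-file approach the question mentions (the class and method names are illustrative; it assumes the file can be replaced via a temp file in the same directory):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class Prepend4Bytes {
    // Write the 4 header bytes to a temp file, append the original
    // file's contents after them, then swap the temp file in place
    // of the original.
    public static void prepend(File file, byte[] header) throws IOException {
        if (header.length != 4) {
            throw new IllegalArgumentException("need exactly 4 bytes");
        }
        File tmp = new File(file.getParentFile(), file.getName() + ".tmp");
        try (OutputStream out = new FileOutputStream(tmp);
             InputStream in = new FileInputStream(file)) {
            out.write(header);               // the new first 4 bytes
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);        // then the original contents
            }
        }
        if (!file.delete() || !tmp.renameTo(file)) {
            throw new IOException("could not replace " + file);
        }
    }
}
```

This is O(file size) per prepend, which matches the thread's conclusion: there is no way to shift a file's contents in place, so a copy (RAF, stream, or NIO channel) is unavoidable.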
Thanks Mel
I revised the code to use a max of 8MB buffers for both the nio and stdio copies...
Looks like NIO is a pretty clear winner... but your mileage may vary, lots... you'd need to test this hundreds of times, and normalize, to get any "real" metrics... and I for one couldn't be bothered... it's one of those things that's "fast enough"... 7 seconds to copy a 250 MB file to/from the same physical disk is pretty-effin-awesome really, isn't it? ... looks like Vista must be one of those O/S's (mentioned in the API doco) which can channel from a-to-b without going through the VM.
... and BTW, it took the program which produced this file 11,416 millis to write it (from an int-array (i.e. all in memory)).
revised code
package forums;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.channels.FileChannel;

class NioBenchmark1
{
  private static final double NANOS = Math.pow(10, 9);
  private static final int BUFF_SIZE = 8 * 1024 * 1024; // 8 MB

  interface Copier {
    public void copy(File source, File dest) throws IOException;
  }

  static class NioCopier implements Copier {
    public void copy(File source, File dest) throws IOException {
      FileChannel in = null;
      FileChannel out = null;
      try {
        in = (new FileInputStream(source)).getChannel();
        out = (new FileOutputStream(dest)).getChannel();
        final int buff_size = Math.min((int) source.length(), BUFF_SIZE);
        long n = -1;
        int pos = 0;
        while ((n = in.transferTo(pos, buff_size, out)) == buff_size) {
          pos += n;
        }
      } finally {
        if (in != null) in.close();
        if (out != null) out.close();
      }
    }
  }

  static class NioCopier2 implements Copier {
    public void copy(File source, File dest) throws IOException {
      if (!dest.exists()) {
        dest.createNewFile();
      }
      FileChannel in = null;
      FileChannel out = null;
      try {
        in = new FileInputStream(source).getChannel();
        out = new FileOutputStream(dest).getChannel();
        final int buff_size = Math.min((int) in.size(), BUFF_SIZE);
        long n = -1;
        int pos = 0;
        while ((n = out.transferFrom(in, pos, buff_size)) == buff_size) {
          pos += n;
        }
      } finally {
        if (in != null) in.close();
        if (out != null) out.close();
      }
    }
  }

  static class IoCopier implements Copier {
    private byte[] buffer = new byte[BUFF_SIZE];

    public void copy(File source, File dest) throws IOException {
      InputStream in = null;
      FileOutputStream out = null;
      try {
        in = new FileInputStream(source);
        out = new FileOutputStream(dest);
        int count = -1;
        while ((count = in.read(buffer)) != -1) {
          out.write(buffer, 0, count);
        }
      } finally {
        if (in != null) in.close();
        if (out != null) out.close();
      }
    }
  }

  public static void main(String[] arg) {
    final String filename = "SieveOfEratosthenesTest.txt";
    //final String filename = "PrimeTester_SieveOfPrometheuzz.txt";
    final File src = new File(filename);
    System.out.println("copying " + filename + " " + src.length() + " bytes");
    final File dest = new File(filename + ".bak");
    try {
      time(new IoCopier(), src, dest);
      time(new NioCopier(), src, dest);
      time(new NioCopier2(), src, dest);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  private static void time(Copier copier, File src, File dest) throws IOException {
    System.gc();
    try { Thread.sleep(1); } catch (InterruptedException e) {}
    dest.delete();
    long start = System.nanoTime();
    copier.copy(src, dest);
    long stop = System.nanoTime();
    System.out.println(copier.getClass().getName() + " took " + ((stop - start) / NANOS) + " seconds");
  }
}
output
C:\Java\home\src\forums>"C:\Program Files\Java\jdk1.6.0_12\bin\java.exe" -Xms512m -Xmx1536m -enableassertions -cp C:\Java\home\classes forums.NioBenchmark1
copying SieveOfEratosthenesTest.txt 259678795 bytes
forums.NioBenchmark1$IoCopier took 14.333866455 seconds
forums.NioBenchmark1$NioCopier took 7.712665715 seconds
forums.NioBenchmark1$NioCopier2 took 6.206867074 seconds
Press any key to continue . . .
Having said that... the NIO has lost a fair bit of its charm... testing transferTo's return value and maintaining your own position in the file is "cumbersome" (IMHO)... I'm not even certain that mine is completely correct (?n+=pos or n+=pos+1?).... hmmm..
Cheers. Keiths. -
Efficient way to write to file
Hi !
I'm trying to write a big amount of lines into a file.
The insertion of my string is inside a while loop.
I'm using a
BufferedWriter out = new BufferedWriter(
new OutputStreamWriter(
new FileOutputStream(
new File("file.txt"))));
But it seems that it uses a lot of memory, and it does not write to the file until it gets out of the while statement... Any suggestions???
Thank you
Well, let me give you a sample of my code.
I'm connecting to a database and getting some values:
rs1 = statement1.executeQuery(query);
while (rs1.next()) {
    // doing something
    String a = ...
    String b = ...
    String c = ...
    String temp = a.concat(" :: ").concat(b).concat(" :: ").concat(c);
    out.write(temp);
    out.newLine();
}
out.close();
You think a flush could work, or should I do something else? -
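For reference, a periodic flush() inside the loop does make the data reach the file before the loop ends; a minimal sketch (the class/method names and the flush interval are illustrative, not from the thread):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.Writer;

public class FlushingWriterDemo {
    // Write lines through a BufferedWriter, flushing every 1000 lines
    // so the file fills up as the loop runs instead of only at close().
    public static void writeLines(Writer sink, String[] lines) throws IOException {
        BufferedWriter out = new BufferedWriter(sink);
        int count = 0;
        for (String line : lines) {
            out.write(line);
            out.newLine();
            if (++count % 1000 == 0) {
                out.flush(); // push buffered chars to the underlying file
            }
        }
        out.close(); // close() flushes whatever remains
    }
}
```

Note the BufferedWriter's own buffer is only a few KB, so if the program really uses "a lot of mem", the culprit is more likely the ResultSet or string building in the loop than the writer itself.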
MDX - More efficient way?
Hi
I am still learning MDX and have written this code. It needs to recalculate all employees in a cost center (COSTCENTER is a property of the EMPLOYEE dimension) when one of the assumptions (e.g. P00205) changes. These assumptions are planned at cost center level, against the employee DUMMY. Is there a more efficient way to write this code, as there are lots of accounts that need to be posted to:
*SELECT (%EMPLOYEE%, ID, EMPLOYEE, [COSTCENTER] = %COSTCENTER_SET%)
//Workmens Comp
*XDIM_MEMBERSET P_ACCT = "IKR0000642000"
*FOR %EMP% = %EMPLOYEE%
[EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00205],[EMPLOYEE].[DUMMY ]) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
*NEXT
*COMMIT
//Fringe Benefits Employer
*XDIM_MEMBERSET P_ACCT = "IKR0000628100"
*FOR %EMP% = %EMPLOYEE%
[EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00210],[EMPLOYEE].[DUMMY ]) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
*NEXT
*COMMIT
//Fringe Benefits Other
*XDIM_MEMBERSET P_ACCT = "IKR0000626100"
*FOR %EMP% = %EMPLOYEE%
[EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00209],[EMPLOYEE].[DUMMY ]) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
*NEXT
*COMMIT
Maybe the following?
*SELECT (%EMPLOYEE%, ID, EMPLOYEE, [COSTCENTER] = %COSTCENTER_SET%)
*XDIM_MEMBERSET EMPLOYEE = %EMPLOYEE%
*XDIM_MEMBERSET P_ACCT = IKR0000642000,IKR0000628100,IKR0000626100
//Workmens Comp
[P_ACCT].[#IKR0000642000] = ( [P_ACCT].[P00205],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
//Fringe Benefits Employer
[P_ACCT].[#IKR0000628100] = ( [P_ACCT].[P00210],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
//Fringe Benefits Other
[P_ACCT].[#IKR0000626100] = ( [P_ACCT].[P00209],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
*COMMIT
You should probably also restrict explicitly on all other dimensions in your applications so that none are accidentally left open that don't need to be.
Ethan -
More cost efficient way??
Any suggestions as to a more efficient way to write the following statement?
SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
PR.VEHICLEID P01,
PR.OCCNUMBER P02,
PR.PERSONTYPEID P03,
NM.STRIKEVEHICLEID P22
FROM NASS.PARDATA PAR,
GES.CRASHDATA CD,
GES.PERSON PR,
GES.NONMOTORIST NM
WHERE PAR.PARID=CD.PARID AND
CD.PARID=PR.PARID AND
PR.PARID=NM.PARID (+) AND
PR.VEHICLEID=NM.VEHICLEID (+) AND
PR.OCCUPANTID=NM.OCCUPANTID (+) AND
((PR.PERSONTYPEID IN (26706,26707,26708,26709,26710) AND
(NM.STRIKEVEHICLEID<1 OR
NM.STRIKEVEHICLEID IS NULL)) OR
(PR.PERSONTYPEID IN (26704,26705,26711) AND
NM.STRIKEVEHICLEID>0))
ORDER BY 1,2,3,4,5,6;
I would try this:
SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
PR.VEHICLEID P01,
PR.OCCNUMBER P02,
PR.PERSONTYPEID P03,
NM.STRIKEVEHICLEID P22
FROM NASS.PARDATA PAR,
GES.CRASHDATA CD,
GES.PERSON PR,
GES.NONMOTORIST NM
WHERE PAR.PARID=CD.PARID AND
CD.PARID=PR.PARID AND
PR.PARID=NM.PARID (+) AND
PR.VEHICLEID=NM.VEHICLEID (+) AND
PR.OCCUPANTID=NM.OCCUPANTID (+) AND
PR.PERSONTYPEID BETWEEN 26706 AND 26710 AND
NVL(NM.STRIKEVEHICLEID,0)<1
UNION ALL
SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
PR.VEHICLEID P01,
PR.OCCNUMBER P02,
PR.PERSONTYPEID P03,
NM.STRIKEVEHICLEID P22
FROM NASS.PARDATA PAR,
GES.CRASHDATA CD,
GES.PERSON PR,
GES.NONMOTORIST NM
WHERE PAR.PARID=CD.PARID AND
CD.PARID=PR.PARID AND
PR.PARID=NM.PARID (+) AND
PR.VEHICLEID=NM.VEHICLEID (+) AND
PR.OCCUPANTID=NM.OCCUPANTID (+) AND
PR.PERSONTYPEID BETWEEN 26704 AND 26705 AND
NM.STRIKEVEHICLEID>0
UNION ALL
SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
PR.VEHICLEID P01,
PR.OCCNUMBER P02,
PR.PERSONTYPEID P03,
NM.STRIKEVEHICLEID P22
FROM NASS.PARDATA PAR,
GES.CRASHDATA CD,
GES.PERSON PR,
GES.NONMOTORIST NM
WHERE PAR.PARID=CD.PARID AND
CD.PARID=PR.PARID AND
PR.PARID=NM.PARID (+) AND
PR.VEHICLEID=NM.VEHICLEID (+) AND
PR.OCCUPANTID=NM.OCCUPANTID (+) AND
PR.PERSONTYPEID = 26711 AND
NM.STRIKEVEHICLEID>0
ORDER BY 1,2,3,4,5,6; -
Advice needed: Efficient way to scan users
Hi all,
I wish to know an efficient way to scan users in Lighthouse. I need to write a workflow that checks out all the users and performs some updates. This workflow should run every day at midnight.
I have created a scanner myself. Basically what it does is:
1. Call the FormUtils.getUsers method to return all users' names into a variable.
2. Loop through this list and call a subprocess workflow to process every user. This subprocess checks out a user view, performs updates, and then checks the view back in.
This solution is not efficient at all, since it causes my JVM to run out of memory (1 GB RAM assigned to the JVM, with about 78,000 users).
Any advice is highly appreciated. Thank you.
Steve
Ok... I now understand what you are doing and why you need this.
A long, long, long time ago (back in 3.x days) the deferred task scanner was really bad. Its nightly scan would scan ALL users each time. This is fine when your client had 4k users...but not when it has 140k users.
Additionally, the "set deferred task" function had problems with two tasks with the same name "i.e. disable resource" since it used the name as the xml object name which can not be duplicated.
soooo, to beat this I rewrote the deferred task handler to allow me to do all of this. Part of this was to add a searchable field called 'nextTaskDate' on the user object. After each workflow this 'date' is updated, so it is always correctly populated with the user's next deferred task date.
Each night the scanner runs and queries all users with a nextTaskDate of today. This gives us a result set that we can iterate over, instead of having to list each user and search for tasks. It's a billion times faster.
Your best bet is to store the task date in milliseconds and make your query "all users with next task date BEFORE now"... this way, if the server is hosed, you can execute tasks you may have missed.
We have an entire re-usable implementation framework that we have patented (of which this code is a part) that answers most of these types of issues you are bringing up. It makes these implementations much simpler, faster, more scalable and maintainable.
this make sense?
Dana Reed
AegisUSA
Denver, CO 80211
[email protected]
773.412.3782
"Now hiring best-in-class IdM architects. Inquire via email" -
Need help to get alternate or better way to write query
Hi,
I am on Oracle 11.2
DDL and sample data
create table tab1 -- 1 million rows at any given time
(
  id number not null,
  ref_cd varchar2(64) not null,
  key varchar2(44) not null,
  ctrl_flg varchar2(1),
  ins_date date
);
create table tab2 -- close to 100 million rows
(
  id number not null,
  ref_cd varchar2(64) not null,
  key varchar2(44) not null,
  ctrl_flg varchar2(1),
  ins_date date,
  upd_date date
);
insert into tab1 values (1,'ABCDEFG', 'XYZ','Y',sysdate);
insert into tab1 values (2,'XYZABC', 'DEF','Y',sysdate);
insert into tab1 values (3,'PORSTUVW', 'ABC','Y',sysdate);
insert into tab2 values (1,'ABCDEFG', 'WYZ','Y',sysdate);
insert into tab2 values (2,'tbVCCmphEbOEUWbxRKczvsgmzjhROXOwNkkdxWiPqDgPXtJhVl', 'ABLIOWNdj','Y',sysdate);
insert into tab2 values (3,'tbBCFkphEbOEUWbxATczvsgmzjhRQWOwNkkdxWiPqDgPXtJhVl', 'MQLIOWNdj','Y',sysdate);
I need to get all rows from tab1 that do not match tab2, and any row from tab1 that matches ref_cd in tab2 but where key is different.
Expected Query output
'ABCDEFG', 'WYZ'
'XYZABC', 'DEF'
'PORSTUVW', 'ABC'
Existing Query
select
  ref_cd,
  key
from
(
  select
    ref_cd,
    key
  from
    tab1, tab2
  where
    tab1.ref_cd = tab2.ref_cd and
    tab1.key <> tab2.key
  union
  select
    ref_cd,
    key
  from
    tab1
  where
    not exists
    (
      select 1
      from
        tab2
      where
        tab2.ref_cd = tab1.ref_cd
    )
);
I am sure there will be an alternate, better way to write this query. I'd appreciate it if any of you gurus could suggest an alternative solution.
Thanks in advance.
Hi,
user572194 wrote:
... DDL and sample data ...
create table tab2 -- close to 100 million rows
id number not null,
ref_cd varchar2(64) not null,
key varchar2(44) not null,
ctrl_flg varchar2(1),
ins_date date,
upd_date date
insert into tab2 values (1,'ABCDEFG', 'WYZ','Y',sysdate);
insert into tab2 values (2,'tbVCCmphEbOEUWbxRKczvsgmzjhROXOwNkkdxWiPqDgPXtJhVl', 'ABLIOWNdj','Y',sysdate);
insert into tab2 values (3,'tbBCFkphEbOEUWbxATczvsgmzjhRQWOwNkkdxWiPqDgPXtJhVl', 'MQLIOWNdj','Y',sysdate);
Thanks for posting the CREATE TABLE and INSERT statements. Remember why you go to all that trouble: so the people who want to help you can re-create the problem and test their ideas. When you post statements that don't work, it's just a waste of time.
None of the INSERT statements for tab2 work. Tab2 has 6 columns, but the INSERT statements only have 5 values.
Please test your code before you post it.
I need to get all rows from tab1 that does not match tab2
What does "match" mean in this case? Does it mean that tab1.ref_cd = tab2.ref_cd?
and any row from tab1 that matches ref_cd in tab2 but key is different.
Existing Query
select
ref_cd,
key
from
(
  select
    ref_cd,
    key
  from
    tab1, tab2
  where
    tab1.ref_cd = tab2.ref_cd and
    tab1.key <> tab2.key
  union
  select
    ref_cd,
    key
  from
    tab1
  where
    not exists
    (
      select 1
      from
        tab2
      where
        tab2.ref_cd = tab1.ref_cd
    )
)
Does that really work? In the first branch of the UNION, you're referencing a column called key, but both tables involved have columns called key. I would expect that to cause an error.
Please test your code before you post it.
Right before UNION, did you mean
tab1.key != tab2.key? As you may have noticed, this site doesn't like to display the <> inequality operator. Always use the other (equivalent) inequality operator, !=, when posting here.
I am sure there will be an alternate way to write this query in better way. Appreciate if any of you gurus suggest alternative solution.
Avoid UNION; it can be very inefficient.
Maybe you want something like this:
SELECT tab1.ref_cd
, tab1.key
FROM tab1
LEFT OUTER JOIN tab2 ON tab2.ref_cd = tab1.ref_cd
WHERE tab2.ref_cd IS NULL
OR tab2.key != tab1.key
; -
Efficient Way of Inserting records into multiple tables
Hello everyone,
I'm creating an employee application using the Struts framework. One of the functions of the application is to create new employees. This involves a single web form; upon submitting this form, a record is inserted into two separate tables. I'm using a JavaBean (not given here) between the JSP page and the Java file (which is partly given below). Now, this Java file does work (i.e. it does insert a record into two separate tables).
My question is: is there a more efficient way of doing the insert into multiple tables (in terms of performance) than the way I've done it as shown below? Please note, I am using database pooling and a MySQL db. I thought about batch processing but was having problems writing the code for it below.
Any help would be appreciated.
Assad
package com.erp.ems.db;
import com.erp.ems.entity.Employee;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collection;
import java.util.ArrayList;
public class EmployeeDAO {
    private Connection con;

    public EmployeeDAO(Connection con) {
        this.con = con;
    }

    // METHOD FOR CREATING (INSERTING) A NEW EMPLOYEE
    public void create(Employee employee) throws CreateException {
        PreparedStatement psemployee = null;
        PreparedStatement psscheduleresource = null;
        String sqlemployee = "INSERT INTO employee (FIRSTNAME,SURNAME,GENDER) VALUES (?,?,?)";
        String sqlscheduleresource = "INSERT INTO scheduleresource (ITBCRATE,SKILLS) VALUES (?,?)";
        try {
            if (con.isClosed()) {
                throw new IllegalStateException("error.unexpected");
            }
            // Insert into employee table
            psemployee = con.prepareStatement(sqlemployee);
            psemployee.setString(1, employee.getFirstName());
            psemployee.setString(2, employee.getSurname());
            psemployee.setString(3, employee.getGender());
            // Insert into scheduleresource table
            psscheduleresource = con.prepareStatement(sqlscheduleresource);
            psscheduleresource.setDouble(1, employee.getItbcRate());
            psscheduleresource.setString(2, employee.getSkills());
            // note: || (not &&) so both updates run and each is checked
            if (psemployee.executeUpdate() != 1 || psscheduleresource.executeUpdate() != 1) {
                throw new CreateException("error.create.employee");
            }
        } catch (SQLException e) {
            e.printStackTrace();
            throw new RuntimeException("error.unexpected");
        } finally {
            try {
                if (psemployee != null) psemployee.close();
                if (psscheduleresource != null) psscheduleresource.close();
            } catch (SQLException e) {
                e.printStackTrace();
                throw new RuntimeException("error.unexpected");
            }
        }
    }
}
Hi,
You can use the setAutoCommit function here: set auto-commit to false first, and when you are done with all your queries, commit and set it back to true. This function takes a boolean value.
It's helpful when you want the record to be inserted into all tables or not at all.
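That advice can be sketched in JDBC like this (the helper interface and names are illustrative, not from the thread):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class TransactionalInsert {
    @FunctionalInterface
    public interface SqlWork {
        void run(Connection con) throws SQLException;
    }

    // Disable auto-commit, run all the statements, then commit once;
    // roll back if anything fails, so the inserts are all-or-nothing.
    public static void runInTransaction(Connection con, SqlWork work) throws SQLException {
        con.setAutoCommit(false);
        try {
            work.run(con);
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.setAutoCommit(true); // restore for the pooled connection
        }
    }
}
```

In the EmployeeDAO above, both executeUpdate() calls would go inside work.run(con), so either both tables get their row or neither does.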
Hope it helps -
Need to get the input dynamically and write the records
hi everyone,
I am having a doubt about scripts:
1. I need to get the input from the user (the input is a table name).
2. If the input matches one of the tables in the database, then it has to write the records in CSV format.
3. The fields should be delimited by commas.
Is it possible to write a script or procedure for this?
The script should be a generic one.
Thanks and Regards,
R.Ratheesh
hi,
Actually, my column names are:
select T24_ACCOUNT_NUMBER||','||
NULLIF(T24_CUSTOMER,NULL)||','||
NULLIF(T24_CATEGORY,NULL)||','||
NULLIF(T24_ACCOUNT_TITLE_1,'')||','||
NVL(T24_ACCOUNT_TITLE_2,'')||','||
NULLIF(T24_SHORT_TITLE,'')||','||
NULLIF(T24_SHORT_TITLE,'')||','||
NULLIF(T24_POSITION_TYPE,'')||','||
NULLIF(T24_CURRENCY,'')||','||
NULLIF(T24_LIMIT_REF,NULL)||','||
NULLIF(T24_ACCOUNT_OFFICER,NULL)||','||
NULLIF(T24_OTHER_OFFICER,NULL)||','||
NULLIF(T24_POSTING_RESTRICT,NULL)||','||
NULLIF(T24_RECONCILE_ACCT,'')||','||
NULLIF(T24_INTEREST_LIQU_ACCT,NULL)||','||
NULLIF(T24_INTEREST_COMP_ACCT,NULL)||','||
NULLIF(T24_INT_NO_BOOKING,'')||','||
NULLIF(T24_REFERAL_CODE,NULL)||','||
NULLIF(T24_WAIVE_LEDGER_FEE,'')||','||
NULLIF(T24_PASSBOOK,'')||','||
NVL(TO_CHAR(T24_OPENING_DATE,'YYYYMMDD'),'')||','||
NULLIF(T24_LIMK_TO_LIMIT,'')||','||
NULLIF(T24_CHARGE_ACCOUNT,NULL)||','||
NULLIF(T24_CHARGE_CCY,'')||','||
NULLIF(T24_INTEREST_CCY,'')||','||
NULLIF(T24_ALT_ACCT_IDa,NULL)||','||
NULLIF(T24_PREMIUM_TYPE,'')||','||
NULLIF(T24_PREMIUM_FREQ,'')||','||
NULLIF(T24_JOINT_HOLDER,NULL)||','||
NULLIF(T24_RELATION_CODE,NULL)||','||
NULLIF(T24_JOINT_NOTES,'')||','||
NULLIF(T24_ALLOW_NETTING,'')||','||
NULLIF(T24_LEDG_RECO_WITH,NULL)||','||
NULLIF(T24_STMT_RECO_WITH,NULL)||','||
NULLIF(T24_OUR_EXT_ACCT_NO,'')||','||
NULLIF(T24_RECO_TOLERANCE,NULL)||','||
NULLIF(T24_AUTO_PAY_ACCT,NULL)||','||
NULLIF(T24_ORIG_CCY_PAYMENT,'')||','||
NULLIF(T24_AUTO_REC_CCY,'')||','||
NULLIF(T24_DISPO_OFFICER,NULL)||','||
NULLIF(T24_DISPO_EXEMPT,'')||','||
NULLIF(T24_ICA_DISTRIB_RATIO,NULL)||','||
NULLIF(T24_LIQUIDATION_MODE,'')||','||
NULLIF(T24_INCOME_TAX_CALC,'')||','||
NULLIF(T24_SINGLE_LIMIT,'')||','||
NULLIF(T24_CONTINGENT_INT,'')||','||
NULLIF(T24_CREDIT_CHECK,'')||','||
NULLIF(T24_AVAILABLE_BAL_UPD,'')||','||
NULLIF(T24_CONSOLIDATE_ENT,'')||','||
NULLIF(T24_MAX_SUB_ACCOUNT,NULL)||','||
NULLIF(T24_MASTER_ACCOUNT,'')||','||
NULLIF(T24_FUND_ID,'') FROM T24_CACCOUNT;
This is the order, but my output looks like:
SQL> set serveroutput on
SQL> set verify off
SQL> set heading off
SQL> spool /emea/dbtest/tamdbin/bnk/bnk.run/DATA.BP/SAM_ACC.csv
SQL> exec input_table('T24_CACCOUNT');
declare cursor c2 is select
rownum||','||T24_WAIVE_LEDGER_FEE||','||T24_PASSBOOK||','||T24_OPENING_DATE||','
||T24_LIMK_TO_LIMIT||','||T24_CHARGE_ACCOUNT||','||T24_CHARGE_CCY||','||T24_INTE
REST_CCY||','||T24_ALT_ACCT_IDA||','||T24_PREMIUM_TYPE||','||T24_PREMIUM_FREQ||'
,'||T24_JOINT_HOLDER||','||T24_RELATION_CODE||','||T24_JOINT_NOTES||','||T24_ALL
OW_NETTING||','||T24_LEDG_RECO_WITH||','||T24_STMT_RECO_WITH||','||T24_OUR_EXT_A
CCT_NO||','||T24_RECO_TOLERANCE||','||T24_AUTO_PAY_ACCT||','||T24_ORIG_CCY_PAYME
NT||','||T24_AUTO_REC_CCY||','||T24_DISPO_OFFICER||','||T24_DISPO_EXEMPT||','||T
24_ICA_DISTRIB_RATIO||','||T24_LIQUIDATION_MODE||','||T24_INCOME_TAX_CALC||','||
T24_SINGLE_LIMIT||','||T24_CONTINGENT_INT||','||T24_CREDIT_CHECK||','||T24_AVAIL
ABLE_BAL_UPD||','||T24_CONSOLIDATE_ENT||','||T24_MAX_SUB_ACCOUNT||','||T24_MASTE
R_ACCOUNT||','||T24_FUND_ID||','||T24_ACCOUNT_NUMBER||','||T24_CUSTOMER||','||T2
4_CATEGORY||','||T24_ACCOUNT_TITLE_1||','||T24_ACCOUNT_TITLE_2||','||T24_SHORT_T
ITLE||','||T24_MNEMONIC||','||T24_POSITION_TYPE||','||T24_CURRENCY||','||T24_LIM
IT_REF||','||T24_ACCOUNT_OFFICER||','||T24_OTHER_OFFICER||','||T24_POSTING_RESTR
ICT||','||T24_RECONCILE_ACCT||','||T24_INTEREST_LIQU_ACCT||','||T24_INTEREST_COM
P_ACCT||','||T24_INT_NO_BOOKING||','||T24_REFERAL_CODE SRC from T24_CACCOUNT; r2
c2%rowtype; begin for r2 in c2 loop dbms_output.put_line(r2.SRC); end loop;
end;
1,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222284001,2222222,6001,,,,,TR,USD,,,,,,,,
2,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222384001,2222223,6001,,,,,TR,USD,,,,,,,,
3,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222484001,2222224,6001,,,,,TR,USD,,,,,,,,
4,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222584001,2222225,6001,,,,,TR,USD,,,,,,,,
5,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222684001,2222226,6001,,,,,TR,USD,,,,,,,,
6,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222284001,2222222,6001,,,,,TR,USD,,,,,,,,
7,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222384001,2222223,6001,,,,,TR,USD,,,,,,,,
8,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222484001,2222224,6001,,,,,TR,USD,,,,,,,,
9,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222584001,2222225,6001,,,,,TR,USD,,,,,,,,
10,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222684001,2222226,6001,,,,,TR,USD,,,,,,,
11,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222284001,2222222,6001,,,,,TR,USD,,,,,,,
12,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222384001,2222223,6001,,,,,TR,USD,,,,,,,
13,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222484001,2222224,6001,,,,,TR,USD,,,,,,,
14,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222584001,2222225,6001,,,,,TR,USD,,,,,,,
15,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,222222684001,2222226,6001,,,,,TR,USD,,,,,,,
Here, if we look at the previous select statement, it starts picking up the records from the middle, and I don't know why. Also, it's not writing the records to the spool path which I have given. Can you please check it out if possible?
Thanks,
R.Ratheesh -
Most efficient way to delete "removed" photos from hard disk?
Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around in our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is, what is the most efficient way to permanently delete these unwanted photos from the hard disk
I did fine one suggestion that said to synchronize the parent folder with their respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
Thank you!
Kenneth
I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
My suggestions (assuming you are prepared to combine the current catalogs into one)
in each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later)
as John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
then in order to separate out the image files that ARE imported to LR, from those which either never were / have been removed, I would duplicate just the imported ones, to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
to do this, highlight all the images in the whole catalog, then use File / "Export as Catalog" selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently, will be left behind by this operation.
your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
RP -
Best way to write SELECT statement
Hi,
I am selecting fields from one table, and need to use two fields on that table to look up additional fields in two other tables.
I do not want to use a VIEW to do this.
I need to keep all records in the original selection, yet I've been told that it's not good practice to use LEFT OUTER joins. What I really need to do is multiple LEFT OUTER joins.
What is the best way to write this? Please reply with actual code.
I could use 3 internal tables, where the second 2 use "FOR ALL ENTRIES" to obtain the additional data. But then how do I append the 2 internal tables back to the first? I've been told it's bad practice to use nested loops as well.
Thanks.
Hi,
In your case you have two internal tables and need to update one from the other.
Do the following steps:
* Get the records from the database tables first.
sort: itab1 by keyfield,   " sorting by the key is essential for binary search
      itab2 by keyfield.   " same key as used in the comparison below
loop at itab1 into wa_tab1.
  read table itab2 into wa_tab2      " sets sy-tabix to the first match
       with key keyfield = wa_tab1-keyfield
       binary search.
  if sy-subrc = 0.                   " no match -> the inner loop is skipped
    v_kna1_index = sy-tabix.
    loop at itab2 into wa_tab2 from v_kna1_index.  " start at the match, no WHERE clause
      if wa_tab2-keyfield <> wa_tab1-keyfield.     " past the matching block?
        exit.
      endif.
      " ****** your actual logic for the matched pair goes here ******
    endloop.                         " itab2 loop
  endif.
endloop.                             " itab1 loop
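For readers outside ABAP, the same parallel-cursor idea can be sketched in Python (illustrative only; the field name `key` and the sample rows are invented for the example). Both tables are sorted by the join key, the start of each matching block is found by binary search, and the inner scan stops as soon as the key changes:

```python
from bisect import bisect_left

def parallel_cursor_join(itab1, itab2):
    """Yield (row1, row2) pairs whose 'key' fields match.

    Each inner row is visited at most once per matching block,
    instead of once per outer row as in a naive nested loop."""
    itab1 = sorted(itab1, key=lambda r: r["key"])
    itab2 = sorted(itab2, key=lambda r: r["key"])
    keys2 = [r["key"] for r in itab2]
    for row1 in itab1:
        idx = bisect_left(keys2, row1["key"])  # like READ TABLE ... BINARY SEARCH
        while idx < len(itab2) and itab2[idx]["key"] == row1["key"]:
            yield row1, itab2[idx]             # exit the block once keys differ
            idx += 1

pairs = list(parallel_cursor_join(
    [{"key": 1, "a": "x"}, {"key": 2, "a": "y"}],
    [{"key": 1, "b": "p"}, {"key": 1, "b": "q"}, {"key": 3, "b": "r"}],
))
# key 1 matches two inner rows; key 2 matches none
```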
Refer the link also you can get idea about the Parallel Cursor - Loop Processing.
http://wiki.sdn.sap.com/wiki/display/Snippets/CopyofABAPCodeforParallelCursor-Loop+Processing
Regards,
Dhina.. -
Efficient way to read CLOB data
Hello All,
We have a stored procedure in Oracle with a CLOB OUT parameter. When it is executed from Java, the stored proc itself runs fast, but reading the data from the CLOB using the 'subString' functionality takes much longer (approx. 6 sec for 540 KB of data).
Could someone please suggest an efficient way to read data from a CLOB? (We need to read the data from the CLOB and write it into a file.)
Thanks & Regards,
Prashant
Hi,
You can try buffered reading/writing of the data; it usually speeds the process up.
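In Java you would typically obtain a character stream with `Clob.getCharacterStream()` and copy it in fixed-size chunks, rather than issuing many small `getSubString()` calls. The chunked-copy pattern itself is language-neutral; here is a sketch in Python, with an in-memory stream standing in for the CLOB (the chunk size and sample data are arbitrary for the example):

```python
import io

def copy_in_chunks(reader, writer, chunk_size=8192):
    """Copy a character stream in fixed-size chunks.

    One sizable buffer per round trip instead of many small
    substring reads; returns the number of characters copied."""
    total = 0
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            break
        writer.write(chunk)
        total += len(chunk)
    return total

# io.StringIO stands in here for the CLOB's character stream.
clob_stream = io.StringIO("x" * 540_000)  # ~540 KB of sample data
out = io.StringIO()
copied = copy_in_chunks(clob_stream, out)
```

In the real JDBC case, `writer` would be a buffered `FileWriter` and `reader` the stream from the CLOB; the shape of the loop is the same.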
See example here:
http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/files/advanced/LOBSample/LOBSample.java.html -
EFFICIENT way of escalating an open task
I need to escalate TASKS that are still open after 31 days.
I figure i need 2 workflows to do this.
As i see it right now:
1st WF: waits for 31 days after the task has been created. On the 31st day it changes a read-only field called "escalate" to YES.
2nd WF: checks for changes in tasks; if (Status = OPEN AND escalate <> pre(escalate)) is true, it sends an escalation email or task.
Is there a more efficient way of doing this?
TIA
Paul
Is there a reason you want two workflows? Why not put an e-mail action after the Wait on the same workflow? If you check the "Reevaluate Rule Conditions After Wait" checkbox on the Wait action, the workflow rule will be re-evaluated after your 31 days... so it would only send the e-mail message if the Task is still open (assuming your workflow condition is set to look at Status = Open).
Chris -
SQL query with multiple tables - what is the most efficient way?
Hello I am learning PL/SQL. I have a simple procedure where I need to find number of employees and departments per location as per user input of location_id.
I have 3 Tables:
LOCATIONS
location_id (pk)
location_name
DEPARTMENTS
department_id (pk)
location_id (fk)
department_name
EMPLOYEES
employee_id (pk)
department_id (fk)
employee_name
1 Location can have 0-MANY Departments
1 Employee has 1 Department
Here is the query I came up with for PL/SQL procedure:
/*Ecount, Dcount are NUMBER variables */
SELECT SUM (EmployeeCount), COUNT(DepartmentNumber)
INTO Ecount, Dcount
FROM
(SELECT COUNT(employee_id) EmployeeCount, department_id DepartmentNumber
FROM employees
GROUP BY department_id
HAVING department_id IN
(SELECT department_id
FROM departments
WHERE location_id = userInput));
I do get the correct result, but I am just wondering if my query is on the right track and if there is a more "efficient" way of doing this.
Thanks in advance for helping a newbie out.
Hi,
Welcome to the forum!
Something like this will be more efficient:
SELECT COUNT (employee_id)            AS ECount
     , COUNT (DISTINCT department_id) AS DCount
FROM employees
WHERE department_id IN ( SELECT department_id
                         FROM departments
                         WHERE location_id = :userInput
                       );
You should also try a join instead of the IN subquery.
For efficiency, do only the things you need to do.
For example, you don't need a count of employees in each department, so don't compute one. That means you won't need the in-line view, so don't have one.
You don't need PL/SQL for this job, so don't use PL/SQL if you don't have to. (I realize this question was out of context, so you may have good reasons for doing this in PL/SQL.)
Do all filtering as early as possible. Don't waste effort computing things that won't be used.
A particular example of this is: Never use a HAVING clause when you can use a WHERE clause. What's the difference between a WHERE clause and a HAVING clause? The WHERE clause is applied before aggregate functions are computed, and the HAVING clause is applied after; there's no other difference. Therefore, if the HAVING clause isn't referencing an aggregate function, it could be done in a WHERE clause instead.
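The rewritten query can be checked against any SQL engine; here is a small sketch using SQLite in place of Oracle (the table and column names follow the example above, and the sample rows are invented). With location 1 owning departments 10 and 20, the WHERE clause filters rows before the aggregates are computed, so only the three employees at that location are counted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE departments (department_id INTEGER PRIMARY KEY, location_id INTEGER);
    CREATE TABLE employees   (employee_id INTEGER PRIMARY KEY, department_id INTEGER);
    INSERT INTO departments VALUES (10, 1), (20, 1), (30, 2);
    INSERT INTO employees   VALUES (1, 10), (2, 10), (3, 20), (4, 30);
""")

# Filter first (WHERE), aggregate second: no in-line view, no HAVING.
ecount, dcount = conn.execute("""
    SELECT COUNT(employee_id), COUNT(DISTINCT department_id)
    FROM employees
    WHERE department_id IN (SELECT department_id
                            FROM departments
                            WHERE location_id = ?)
""", (1,)).fetchone()
```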