Addressing/comparing record data in a .ctl file.
Unfortunately I am importing a flat file with data redundancy, and I would like to dedup the data during the load. Below is the portion of the .ctl file I am unable to get working. The problem is the field-to-field comparison in the second WHEN clause: SQL*Loader does not accept a position-to-position comparison. I have tried the bind-variable method as well, but no luck.
Please let me know how I can compare two strings within the same record in a SQL*Loader control file.
Agent #1 ID is positions 135-139 and Agent #2 ID is positions 964-968. Agent #1 always gets loaded ( based on the other portions of the WHEN clause ). Agent #2 should not get loaded if this is the same agent ID as Agent #1.
--Agent #1
INTO TABLE myschema.Agent
WHEN ( 1489 : 1490 ) = '04' AND ( 1491 : 1492 ) = '02' AND ( 135 : 139 ) != '00000'
(
AGENT_NO POSITION( 135 : 139 ) "TRIM(:AGENT_NO)",
CASE_NO POSITION( 1 : 10 ) "TRIM(:CASE_NO)"
)
--Agent #2, only if not the same as Agent #1
INTO TABLE myschema.Agent
WHEN ( 1489 : 1490 ) = '04' AND
( 1491 : 1492 ) = '02' AND
( 964 : 968 ) != '00000' AND
( 135 : 139 ) != ( 964 : 968 )
(
AGENT_NO POSITION( 964 : 968 ) "TRIM(:AGENT_NO)",
CASE_NO POSITION( 1 : 10 ) "TRIM(:CASE_NO)"
)
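For reference, SQL*Loader's WHEN clause only compares a field against a literal (a character string, hex string, or BLANKS), never against another field, which is why the last condition above is rejected. A common workaround is to preprocess the flat file before the load so the duplicate can be caught with a literal comparison. A minimal sketch in Python: the positions 135-139 and 964-968 come from the question, and overwriting the duplicate with '00000' (so the existing `!= '00000'` test rejects it) is an assumption, not something from the thread.

```python
# Sketch: blank Agent #2's ID when it matches Agent #1, so the existing
# literal test "( 964 : 968 ) != '00000'" can reject the duplicate.
# Positions are 1-based and inclusive, as stated in the question.
def dedup_agents(line, a1=(135, 139), a2=(964, 968)):
    s1, e1 = a1[0] - 1, a1[1]   # convert to 0-based slice bounds
    s2, e2 = a2[0] - 1, a2[1]
    if line[s1:e1] == line[s2:e2]:
        # overwrite Agent #2 with zeros so the WHEN clause skips it
        line = line[:s2] + '00000' + line[e2:]
    return line

def preprocess(src, dst):
    with open(src) as fin, open(dst, 'w') as fout:
        for line in fin:
            fout.write(dedup_agents(line.rstrip('\n')) + '\n')
```

SQL*Loader then loads the preprocessed file with the original control file, minus the field-to-field condition.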
Got it!
Thank you very much!
-Pamsath
Similar Messages
-
Comparing SQL Data Results with CSV file contents
I have the following scenario that I need to resolve and I'm unsure of how to approach it. Let me explain what I am needing and what I have currently done.
I've created an application that automatically marks assessments that delegates complete by comparing SQL Data to CSV file data. I'm using C# to build the objects required that will load the data from SQL into a dataset which is then compared to the
associated CSV file that contains the required results to mark against.
Currently everything is working as expected but I've noticed that if there is a difference in the number of rows returned into the SQL-based dataset, then my application doesn't mark the items at all.
Here is an example:
Scenario: the CSV contains 4 rows with 8 columns of information; however, let's say that the delegate was only able to insert 2 rows of data into the dataset. When this happens it marks everything wrong, because row 1 in both the CSV and the dataset was correct,
but row 2 in the dataset holds the results found in row 4 of the CSV file, so it gets compared against row 2 of the CSV file.
How can I check whether a row exists regardless of its order? I don't want the delegate to lose marks just because the row data in the dataset is not in exactly the same order as the row data
in the CSV file.
I'm at a loss, and any assistance would be a huge help. I have implemented an ORDER BY clause in the dataset and ensured that the same order is set in the CSV file. This has helped for scenarios where there is the right number of rows in the dataset,
but as soon as 1 row is missing from the dataset, the marking doesn't award any marks for either row, even if the data is correct.
I hope I've made sense!! If not, let me know and I will provide a better description and perhaps examples of the dataset data and the csv data that is being compared.
Thanks in advance.
I would read the CSV into a DataTable using OLE DB. Below is code I wrote a few weeks ago to do this.
Then you can compare two datatables by a common primary key (like ID number)
Below is the webpage to compare two datatables
http://stackoverflow.com/questions/10984453/compare-two-datatables-for-differences-in-c
You can find lots of examples by performing the following Google search:
"c# linq compare two datatables"
// Creates a CSVReader class that loads a delimited text file into a DataSet via OLE DB
using System;
using System.Data;
using System.Data.OleDb;
using System.IO;
using System.Windows.Forms;

public class CSVReader
{
    public DataSet ReadCSVFile(string fullPath, bool headerRow)
    {
        string path = fullPath.Substring(0, fullPath.LastIndexOf("\\") + 1);
        string filename = fullPath.Substring(fullPath.LastIndexOf("\\") + 1);
        DataSet ds = new DataSet();
        try
        {
            if (File.Exists(fullPath))
            {
                string ConStr = string.Format(
                    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source={0};" +
                    "Extended Properties=\"Text;HDR={1};FMT=Delimited\"",
                    path, headerRow ? "Yes" : "No");
                string SQL = string.Format("SELECT * FROM [{0}]", filename);
                OleDbDataAdapter adapter = new OleDbDataAdapter(SQL, ConStr);
                adapter.Fill(ds, "TextFile");
                ds.Tables[0].TableName = "Table1";
                // normalise column names so they can be referenced downstream
                foreach (DataColumn col in ds.Tables["Table1"].Columns)
                    col.ColumnName = col.ColumnName.Replace(" ", "_");
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        return ds;
    }
}
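Once both sides are loaded, the order-independent marking the question asks for amounts to comparing rows by a common key rather than by position, which is also what the linked DataTable-comparison approaches do. A language-neutral sketch (written in Python; rows are modelled as dicts and the 'id' key column is a hypothetical name, not from the thread):

```python
# Sketch: mark rows by key rather than by position, so a missing or
# reordered row only costs its own marks. 'id' is an assumed key column.
def mark_by_key(expected_rows, submitted_rows, key='id'):
    expected = {r[key]: r for r in expected_rows}
    submitted = {r[key]: r for r in submitted_rows}
    results = {}
    for k, exp in expected.items():
        sub = submitted.get(k)
        results[k] = (sub == exp)   # absent or different -> not marked
    return results
```

With this shape, a delegate who submits rows 1 and 3 (but not 2) still earns full marks for rows 1 and 3.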
jdweng -
50 million records Data extract to CSV files
Hi Experts,
Could you please suggest how we can extract 50 million DSO records to CSV files using Open Hub in BI 7.0?
Create date is a key field in my DSO.
Thanks
Vijay
Hi,
Give some value ranges on the creation date and download the data to files.
Hope it helps.... -
How to compare the data in a highscore file(.txt) for a game?
As mentioned above, I would like to check the high score every time the game finishes, and update it. What classes should be used? FileReader, FileWriter, or Properties? If so, what would the code look like? Thanks a lot.
I would discourage you from using a plain old text file to save your high scores in. The obvious reason is that your little brother can just edit the file in Notepad, then run to you and taunt, "I've got the highest score, nyah nyah nyah!"
If you must do it, I would use PrintWriter and simply print out each line as needed. Reading the lines back into the game is only a tad more difficult, but you can simply read in each line as a String, and store each String in an array, or a Vector if you don't know how many high scores you will have.
In the long run, you should create a serializable object that contains either the Vector or the Array of scores and instruct the object to save itself or load itself as needed. This is essentially how Properties work except you have more control over the internal data format.
On second thought, Properties would work just fine, but you cannot guarantee that the scores will be saved in a particular order - you would have to add an additional routine to read in each Property (specifically, each name/score pair) and then sort them. Once you do that, there's no real difference between a Properties file and rolling your own Serializable array. -
Compare table data with data in csv file
Hi, I have a database table with ~32 million rows. I want to compare the data from a CSV file (~1,500,000 rows) with the data in a specific database column, and as a result I want to see which rows exist in both the CSV file and the DB, and which records in the CSV file are unique. The text I'm searching for is VARCHAR2. I am not allowed to export the database table, so that's not an alternative. This is also a production environment, so one needs to be careful not to disturb production.
Rgds J
How do I ask a question on the forums?
https://forums.oracle.com/message/9362002#9362002
Does the CSV file reside on the DB server system now? -
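On the thread above: once the single column can be streamed out of the database (for example, a cursor fetching just that indexed column, or the CSV loaded into a scratch table), the comparison itself is cheap set logic, and only the ~1.5M CSV values need to be held in memory. A sketch in Python, with db_rows standing in for the cursor over the production column (an assumption, not from the thread):

```python
# Sketch: split CSV values into "present in both" and "unique to the CSV".
# db_rows stands in for a streamed iterator over the 32M-row DB column.
def split_csv_values(csv_values, db_rows):
    remaining = set(csv_values)        # ~1.5M strings fit in memory easily
    in_both = set()
    for v in db_rows:                  # stream the big column exactly once
        if v in remaining:
            in_both.add(v)
            remaining.discard(v)
    return in_both, remaining          # remaining == unique to the CSV
```

Streaming the large side once, rather than materialising it, keeps the memory footprint bounded by the CSV.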
Recording data at particular iterations and writing to text file
Hi all,
this is my first time posting on the NI boards. I'm running into a couple of problems I can't seem to figure out how to fix.
I'm collecting data using a LabJack U3-HV DAQ. I've taken one of the out-of-the-box streaming functions that comes with the LabJack and modified it for my purposes. I am attempting to simply save the recorded data to a text file in columns, one for each of my 4 analog strain gauge inputs, and one for time. For some reason, when the 'Write to Measurement File.vi' executes, it puts everything in rows and the data is unintelligible.
The 2nd issue I am facing, which is not currently visible in my VI, is that I am running my test for 60,000 cycles, which generates a ton of data. I'm measuring creep/fatigue with my strain gauges, so I don't need data for every cycle: probably the first 1000 cycles, then at 2k, 4k, 6k, 8k, 10k, 20k, etc., more of an exponential curve. I tried using some max/min functions and matching the 'Write to Measurement File.vi' with a case structure that only permitted it to write on particular iterations, but can't seem to get it to work.
Thanks in advance for any help!
Attachments:
3.5LCP strain gages v2.vi 66 KB
Hey carfreak,
I've attached a screenshot that shows three different ways of trying to keep track of incrementing data and/or time in a while loop. The top loop just shows a program that demonstrates how shift registers can be used to transfer data between loops. This code just writes the iteration value to the array using the Build Array function.
The first loop counts iterations in an extremely round-about way... the second shows that really you can just build the array directly using the iteration count (the blue "i" in the while loop is just an iteration counter).
The final loop shows how you can use a time stamp to actually keep track of the relative time when a loop executes.
Please note that these three should not actually be implemented together in one VI. I just built them in one BD for simplicity's sake for the screenshot. As described above, the producer-consumer architecture should be used when running parallel loops.
Does that answer your question?
Chris G
Applications Engineer
National Instruments
Attachments:
While Loops and Iterations.JPG 83 KB -
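On the second issue in the question above (recording only certain cycles), the case structure just needs a boolean predicate on the iteration count. Since LabVIEW is graphical, here is the logic as a Python sketch; the exact checkpoint list is an assumption based on the schedule the poster describes:

```python
# Sketch: decide whether cycle i should be logged. Log every cycle up to
# 1000, then only at sparse checkpoints (the poster's 2k, 4k, 6k, 8k,
# 10k, 20k, ... schedule; the exact set below is an assumption).
CHECKPOINTS = {2_000, 4_000, 6_000, 8_000, 10_000,
               20_000, 30_000, 40_000, 50_000, 60_000}

def should_record(i):
    return i <= 1_000 or i in CHECKPOINTS
```

In the VI, the equivalent comparison wired to a case structure around the write node gives the same effect.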
Version 6 of QT showed the original record date of a DV file in the properties box, but version 7 doesn't. Is there a way of retrieving this information in version 7?
Tx
Hi Robert,
Welcome to Sony Community!
Yes. However, the date and time are not displayed while shooting; they are displayed only during playback. Make sure that the date and time are set in the camera before recording. Information about setting the date and time is on page 56 of the Handbook.
http://www.docs.sony.com/release/MHSTS22_handbook.pdf
Please mark the solution as accepted if you find information helpful.
Thank you for your post. -
Error while loading data from CSV with a CTL file?
Hi TOM,
When I try to load data from a CSV file into this table, I use the following control file.
CTL file content:
load data
into table XXXX append
(
Y_aca position char (3),
x_date position date 'yyyy/mm/dd'
NULLIF (x_date = ' '),
X_aca position (* + 3) char (6)
"case when :Y_aca = 'ABCDDD' and :XM_dt is null then
decode(:X_aca,'AB','BA','CD',
'DC','EF','FE','GH','HG',:X_aca)
else :X_aca
end as X_aca",
Z_cdd position char (2),
XM_dt position date 'yyyy/mm/dd'
NULLIF XM_dt = ' '
)
When I try the above CTL file, I get the following errors:
SQL*Loader-281: Warning: ROWS parameter ignored in parallel mode.
SQL*Loader-951: Error calling once/load initialization
ORA-02373: Error parsing insert statement for table "XYZ"."XXXX".
ORA-00917: missing comma
Possible Solutions
Make sure that the data source is valid.
Is a member from each dimension specified correctly in the data source or rules file?
Is the numeric data field at the end of the record? If not, move the numeric data field in the data source or move the numeric data field in the rules file.
Are all members that might contain numbers (such as "100") enclosed in quotation marks in the data source?
If you are using a header, is the header set up correctly? Remember that you can add missing dimension names to the header.
Does the data source contain extra spaces or tabs?
Has the updated outline been saved? -
Field in data file exceeds maximum length - CTL file error
Hi,
I am loading data into a new system using a CTL file, but I am getting the error 'Field in data file exceeds maximum length' for a few records; other records are processed successfully. I have checked the length of the error record in the extract file: it is less than the length of the target column, VARCHAR2(2000 BYTE). Below is an example of the error data:
Hi Rebecca~I have just spoken to our finance department and they have agreed that the ABCs payments made can be allocated to the overdue invoices, can you send any future invoices direct to me so that I can get them paid on time.~Hope this is ok ~Thanks~Terry~
Is this error caused by the special characters in the string?
Below is the ctl file I am using,
OPTIONS (SKIP=2)
LOAD DATA
CHARACTERSET WE8ISO8859P1
INFILE '$FILE'
APPEND
INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
WHEN (1)!= 'FOOTER='
FIELDS TERMINATED BY '|'
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS (
<Column_name>,
<Column_name>,
COMMENTS,
<Column_name>,
<Column_name>
)
Thanks in advance,
Aditya
Hi,
I suspect this is because of the built-in default length of character datatypes in sqlldr: it defaults to CHAR(255), taking no notice of the actual table definition.
Try adding CHAR(2000) to your control file, so you end up with something like this:
OPTIONS (SKIP=2)
LOAD DATA
CHARACTERSET WE8ISO8859P1
INFILE '$FILE'
APPEND
INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
WHEN (1)!= 'FOOTER='
FIELDS TERMINATED BY '|'
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS (
<Column_name>,
<Column_name>,
COMMENTS CHAR(2000),
<Column_name>,
<Column_name>
)
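As a side note (not from the thread): before re-running the load, a short script can report which records would exceed the default 255-character field limit. A sketch in Python, assuming the '|' delimiter, optional '"' enclosure, and the two skipped header lines declared in the control file above:

```python
import csv

# Sketch: report (line number, column index, length) for every field
# longer than `limit` in a '|'-delimited, optionally '"'-quoted file,
# matching FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'.
def overlong_fields(path, limit=255, skip=2):
    hits = []
    with open(path, newline='') as f:
        reader = csv.reader(f, delimiter='|', quotechar='"')
        for lineno, row in enumerate(reader, start=1):
            if lineno <= skip:
                continue             # mirrors OPTIONS (SKIP=2)
            for col, field in enumerate(row):
                if len(field) > limit:
                    hits.append((lineno, col, len(field)))
    return hits
```

Any hits confirm the CHAR(255) default (rather than bad data) is the culprit, and show exactly which columns need an explicit CHAR(n).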
Cheers,
Harry -
How can we restrict records in a CTL file on the basis of another table?
Hello all,
How can we restrict records in a CTL file on the basis of another table?
E.g.:
I have the following control file to load records into the table through SQL*Loader.
LOAD DATA
INTO TABLE THIST APPEND
FIELDS TERMINATED BY "|" TRAILING NULLCOLS
(
LNUM POSITION(1) Char "substr(:LOAN_NUM, 4, 13)",
TSRNUM Char "rtrim(:TRAN_SR_NUM)",
TPROCDT Char "to_char(to_date(rtrim(:TRAN_PROC_DT), 'MMDDYYYY'), 'YYYYMMDD')"
)
I have another table called TFILE in which I have LNUM. I want to import, using the control file and SQL*Loader, only those records from the input text file whose LNUM exists in TFILE.
So how can I restrict this in the control file?
Thanks
Kamlesh Gujarathi
[email protected]
Hello Satyaki De.
Thank you very much for your suggestion, but my private information is entirely separate from this question, and I have already changed every piece of information in it.
Thanks
Kamlesh Gujarathi
[email protected] -
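For the record, a SQL*Loader WHEN clause cannot consult another table, so the usual options are to load everything into a staging table and then INSERT ... SELECT with an EXISTS against TFILE, or to pre-filter the text file before running SQL*Loader. A pre-filter sketch in Python: the '|' delimiter matches the control file, the LNUM derivation mirrors substr(:LOAN_NUM, 4, 13), and the valid LNUM set is assumed to have been fetched from TFILE separately.

```python
# Sketch: keep only lines whose derived LNUM appears in a lookup set
# (the LNUM values from TFILE, obtained out of band). substr(x, 4, 13)
# in Oracle means 13 characters starting at 1-based position 4,
# i.e. the 0-based slice [3:16].
def filter_by_lnum(lines, valid_lnums):
    kept = []
    for line in lines:
        loan_num = line.split('|')[0]
        lnum = loan_num[3:16]        # substr(LOAN_NUM, 4, 13)
        if lnum in valid_lnums:
            kept.append(line)
    return kept
```

The filtered file can then be loaded with the existing control file unchanged.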
How to write a ctl file to load the same data into two columns in a table?
hi,
This is a sample text file
,"21820","Fairbanks, AK","METRO","1","21820"
,"11260","Anchorage, AK","METRO","1","11260"
,"27940","Juneau, AK","MICRO","2","27940"
,"28980","Kodiak, AK","MICRO","2","28980"
and I have to load it into a table cbsa_ref
SQL> desc cbsa_ref;
Name Null? Type
CBSA_NAME VARCHAR2(100)
CBSA_ID VARCHAR2(5)
DISP_CBSA_NAME VARCHAR2(100)
I have to insert the first column of the text file into the CBSA_ID column,
and the second column of the text file into both CBSA_NAME and DISP_CBSA_NAME. Please tell me how to do this?
If we write the ctl file as
load data
APPEND
INTO TABLE cbsa_ref
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
CBSA_ID
,CBSA_NAME
,DISP_CBSA_NAME
)
only the third column in the text file will go into DISP_CBSA_NAME.
Beneven
SQL> !cat load1.dat
,"21820","Fairbanks, AK","METRO","1","21820"
,"11260","Anchorage, AK","METRO","1","11260"
,"27940","Juneau, AK","MICRO","2","27940"
,"28980","Kodiak, AK","MICRO","2","28980"
SQL> !cat load1.ctl
load data
TRUNCATE
INTO table cbsa_ref
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
NULL_COL filler,
CBSA_ID,
CBSA_NAME,
DISP_CBSA_NAME ":CBSA_NAME"
)
SQL> !sqlldr userid=scott/tiger control=load1.ctl data=load1.dat
SQL*Loader: Release 10.2.0.3.0 - Production on Fri Feb 23 10:17:01 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Commit point reached - logical record count 5
SQL> select * from cbsa_ref;
CBSA_NAME CBSA_ID DISP_CBSA_NAME
Fairbanks, AK 21820 Fairbanks, AK
Anchorage, AK 11260 Anchorage, AK
Juneau, AK 27940 Juneau, AK
Kodiak, AK 28980 Kodiak, AK
SQL>
Best regards
Maxim -
Best Approach to Compare Current Data to New Data in a Record.
Which is the best approach to comparing current data to new data for every single record before making updates?
What I have is a table with 5,000 records that need to be updated every week with data from a data feed, but I don't want to update every single record week after week; I want to update only records with different data.
I can think of these options:
1) two cursors (one for current data and one for new)
2) one cursor for new data and one varray for current data
3) one cursor for new data and a select into for current data.
Thanks for your help.
I don't recommend it over MERGE, but in theory you could use a checksum (OWA_OPT_LOCK.checksum()) on the rows, and if they differ, copy the new row into the destination table. Or you might be able to use rowscn to see if things have changed.
Like I said, I don't know that I'd take either approach over MERGE, but they are options.
Gaff
Edited by: Gaff on Feb 11, 2009 3:25 PM
Actually, comparing rowscn between 2 tables may not be an option: you can turn a rowscn into a time value, so if the rowscn is derived from anything other than the column values of the row, that would foil the approach. -
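The checksum idea above can be sketched outside the database as well: hash each row's content and touch only the keys whose hash changed. In this Python sketch, a plain MD5 over the concatenated column values stands in for OWA_OPT_LOCK.checksum() (an illustration, not the Oracle routine):

```python
import hashlib

# Sketch: find the keys of rows whose content differs between the
# current table and the new feed, i.e. the only rows worth updating.
# Rows are dicts keyed by primary key.
def row_hash(row):
    joined = '\x1f'.join(str(row[k]) for k in sorted(row))
    return hashlib.md5(joined.encode()).hexdigest()

def changed_keys(current, incoming):
    cur = {k: row_hash(r) for k, r in current.items()}
    # keys absent from `current` also count as changed (new rows)
    return [k for k, r in incoming.items() if cur.get(k) != row_hash(r)]
```

In SQL the same effect is what MERGE with a change-detecting ON/WHERE condition gives you, which is why the reply still favours MERGE.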
SQLLDR: (CTL file) How to ONLY load 1 record of the CSV file
Hello,
We are in 11g, and we get CSV data.
I'd like to know if there is a way, in the CTL file, to specify that I only want to load the first row.
I know how to do it when there is a common unique value in the first row (WHEN myColumn = 'value1'),
BUT in this case the first row doesn't hold any specific value, so I think I have to tell the loader to take only the first row.
I hope it is clear.
Here is the CTL, in the case we can get a specific value for first row:
LOAD DATA
CHARACTERSET WE8ISO8859P1
APPEND
CONTINUEIF LAST != ";"
INTO TABLE IMPORT_FIRST_LINES
WHEN COL_3 = 'firstRowValue'
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( COL_1 CHAR
, COL_2 CHAR
, COL_3 CHAR)
So, I think I need to change the WHEN clause.
I hope it is clear enough for you to understand.
Thanks in advance,
Olivier
just change the control file like this,
LOAD DATA
CHARACTERSET WE8ISO8859P1
APPEND
CONTINUEIF LAST != ";"
INTO TABLE IMPORT_FIRST_LINES
WHEN COL_3 = 'firstRowValue'
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( COL_1 CHAR )
-- only COL_1 is listed, if you only want to load the 1st column of the csv file into the first column in the table. -
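Another route worth checking against the SQL*Loader documentation is the LOAD=n parameter (command line or OPTIONS clause), which limits how many logical records are loaded. Failing that, the file can simply be trimmed before the load. A sketch in Python; note that if CONTINUEIF stitches several physical lines into one logical record, the slice would need to honour that:

```python
# Sketch: write only the first physical line of the CSV to a new file
# and point sqlldr at that instead. File names are placeholders.
def first_row(src, dst):
    with open(src) as fin, open(dst, 'w') as fout:
        fout.write(fin.readline())
```

This sidesteps the WHEN clause entirely, at the cost of an extra preprocessing step.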
How to read an ascii file and record data in a 2d array
HI everyone,
I have an experimental data file in ascii format. It contains 10 data sets.
I'm trying to read the ASCII file and record the data in a 2D array for further analysis,
but I still can't figure out how to do it.
Please help me to get this done.
Here I have attaced the ascii file.
-Sam
Attachments:
data.asc 123 KB
2015-01-27_18-01-31_fourier_A2-F-abs.zip 728 KB
Got it!
Thank you very much !
-Pamsath -
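In text-processing terms, reading an ASCII data file into a 2-D array is just splitting each line on its delimiter and converting the fields to numbers (in LabVIEW, "Read From Spreadsheet File.vi" does this in a single node). An equivalent sketch in Python, assuming whitespace- or tab-delimited data; adjust the split if the file uses commas:

```python
# Sketch: parse a whitespace/tab-delimited ASCII file into a list of
# rows of floats (a 2-D array), skipping blank lines.
def read_ascii_2d(path):
    data = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts:                       # skip blank lines
                data.append([float(x) for x in parts])
    return data
```

Each inner list is one row of the 2-D array, ready for further analysis.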
Comparing two data records of same IDOC
Hi,
In PI, we need to compare two data records of the same IDOC to figure out what type of change occurred. For example, the BENFIT3 IDOC contains data records titled "E1BEN04". When there are multiple "E1BEN04" data records, we need to compare the data inside them to see whether the name, date of birth, or SSN has changed.
Has anybody come across this before? Your help is much appreciated.
Thanks
-Teja
If it is only a very few fields, then you could use graphical mapping to determine the changes.