Handling bad records in a file.
Hi,
I am using external tables to load data from text files, and that part works well. However, I came across a case where some records are not good; those records are diverted to the bad file.
In that case we lose valuable data. With some validation we could still use it.
Could anyone please suggest a way we could do this?
And I have one more question. Using OWB we can create a single external table for multiple files. I would like to differentiate the records of each file by adding the filename as one extra field.
Can we do this using OWB?
Any suggestions are welcome.
Thanks and regards
Gowtham Sen.
Indeed,
OWB is not a magic tool, but just a good ETL tool that will do what you tell it to :)
You must first figure out what cases you come across, and then figure out what you can do to solve each strange case.
First of all, as has been pointed out, get all the data into your external tables. Avoid doing any type checking or casting at this stage; just stick to VARCHAR fields (and perhaps NUMBER, but even here I tend to use VARCHAR). Then, in a second mapping, try to get your data into a staging table, where you can isolate the bad records and perhaps allow for human interaction to correct the data if all else fails.
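To make that second step concrete, here is a minimal Python sketch of the staging validation: every row arrives as raw strings (as it would from an all-VARCHAR external table), and each row is either cleaned for staging or diverted to an error bucket for later human correction. The field names and the two rules are invented for illustration; the real checks depend on your target column types.

```python
def validate_row(row):
    """Return (cleaned_row, None) if valid, else (None, reason)."""
    raw_id, raw_amount = row
    if not raw_id.strip().isdigit():
        return None, "id is not numeric: %r" % raw_id
    try:
        amount = float(raw_amount)
    except ValueError:
        return None, "amount is not a number: %r" % raw_amount
    return (int(raw_id), amount), None

def split_rows(rows):
    """Divide raw rows into staging-ready rows and (row, reason) errors."""
    staging, errors = [], []
    for row in rows:
        cleaned, reason = validate_row(row)
        if cleaned is not None:
            staging.append(cleaned)
        else:
            errors.append((row, reason))
    return staging, errors
```

Bad rows keep their original values plus a reason, which is exactly what you want in an error table that a person will later review.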
Regarding the files: you mentioned before that delta loads are what you have to do. Unless you are running your loads in a "live" system where data flows in on and off during daily operations, going for a whole refresh would make your life much easier (20 minutes of loading during the night is quite a good deal if it makes your loading logic much simpler).
But, as you have said before, you need delta loads.
Using UNIX shell scripts, you can use the 'diff' command to create your data files. That should be quite straightforward. Otherwise you can perhaps use staging tables and the MINUS operator. There isn't always a quick solution.
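The delta idea behind both the `diff` approach and the staging-table MINUS can be sketched like this (assuming one record per line and that whole-line equality is the right notion of record identity):

```python
def delta(old_lines, new_lines):
    """Records present in the new extract but not in the previous one."""
    previous = set(old_lines)
    return [line for line in new_lines if line not in previous]
```

Note this only finds new or changed records; deletions would be the opposite difference, which is one more reason a full refresh is often the simpler design.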
Borkur
Similar Messages
-
Handling Multiple Records in a file adapter
Hi All,
My source file message :
<Source_MT>
  <Records>              <!-- 0..unbounded -->
    <Country>THAILAND</Country>
  </Records>
  <Records>              <!-- 0..unbounded -->
    <Country>ANGOLA</Country>
  </Records>
</Source_MT>
Target Message:
<Target_MT>
  <Employees>            <!-- 0..unbounded -->
    <Country>THAILAND</Country>
  </Employees>
</Target_MT>
Now in my scenario, the source message will have multiple records with different countries.
I want to send these records to different receivers on the basis of this Country field. After the target message mapping, all the records corresponding to THAILAND must be sent to the THA_RECVR system, and all the records corresponding to ANGOLA must be sent to the Angola receiver system.
Please help me to group the target message records by country and then send them to the corresponding receiver systems.
Should I use BPM?

Shweta,
You need to do enhanced receiver determination. You can handle this by determining them during runtime.
Check this out : Re: Condition In Receiver Determination Not Working [Page 3 : My reply]
raj. -
How to handle the bad record while using bulk collect with limit.
Hi
How do I handle a bad record during insertion/updation without losing the whole transaction?
Example:
I am inserting into a table with a LIMIT of 1000 records and I get an error at the 588th record.
I want to commit the transaction with the records already inserted, log the error into an
error logging table, and then continue the transaction from the next record.
Can anyone advise me in this case?
Regards,
yuva

> How to handle the Bad record as part of the insertion/updation to avoid the transaction.

Use the SAVE EXCEPTIONS clause of the FORALL if you are doing bulk inserts.
See SAVE EXCEPTIONS in the PL/SQL Language doc
http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/tuning.htm
And then see Example 12-9 Bulk Operation that continues despite exceptions
>
Example 12-9 Bulk Operation that Continues Despite Exceptions
-- Temporary table for this example:
CREATE TABLE emp_temp AS SELECT * FROM employees;

DECLARE
  TYPE empid_tab IS TABLE OF employees.employee_id%TYPE;
  emp_sr empid_tab;
  -- Exception handler for ORA-24381:
  errors     NUMBER;
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);
BEGIN
  SELECT employee_id
    BULK COLLECT INTO emp_sr FROM emp_temp
    WHERE hire_date < '30-DEC-94';
  -- Add '_SR' to job_id of most senior employees:
  FORALL i IN emp_sr.FIRST..emp_sr.LAST SAVE EXCEPTIONS
    UPDATE emp_temp SET job_id = job_id || '_SR'
      WHERE emp_sr(i) = emp_temp.employee_id;
  -- If errors occurred during FORALL SAVE EXCEPTIONS,
  -- a single exception is raised when the statement completes.
EXCEPTION
  -- Figure out what failed and why:
  WHEN dml_errors THEN
    errors := SQL%BULK_EXCEPTIONS.COUNT;
    DBMS_OUTPUT.PUT_LINE
      ('Number of statements that failed: ' || errors);
    FOR i IN 1..errors LOOP
      DBMS_OUTPUT.PUT_LINE('Error #' || i || ' occurred during ' ||
        'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
      DBMS_OUTPUT.PUT_LINE('Error message is ' ||
        SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    END LOOP;
END;
/

DROP TABLE emp_temp;
-
How do I skip footer records in the data file through the SQL*Loader control file
hi,
I am using SQL*Loader to load data from a data file, and I have written a control file for it. How do I skip the last 5 records of the data file (the footer records)?
For the first 5 records we can use "SKIP" to achieve it, but how do I achieve this for the last 5 records?
2) Can I mention two data files in one control file? If so, what is the syntax? (We give INFILE where we mention the path of the data file; can I mention two data files in the same control file?)
3) If I have a data file with variable-length records (i.e. the 1st record with 200 characters, the 2nd with 150 and the 3rd with 180), how do I load the data into the table? What is the syntax for it in the control file?
4) If I want to insert SYSDATE into the table through the control file, how do I do it?
5) If I have variable-length records in the data file, with a first name, then white space, then a last name, how do I insert this value (first name plus last name) into a single column of the table? (How do you handle the white space between first name and last name in the data file?)
Thanks in advance
ram

You should read the documentation about SQL*Loader.
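For question 1 specifically, one common workaround is worth sketching: SQL*Loader's SKIP clause only skips leading records, so trailing footer records are usually stripped in a small preprocessing step before running sqlldr. This Python sketch is not part of SQL*Loader itself; the file names are hypothetical and the footer count of 5 comes from the question.

```python
def strip_footer(lines, footer_count=5):
    """Drop the last footer_count lines; return all lines if count is 0."""
    return lines[:-footer_count] if footer_count else list(lines)

def preprocess(in_path, out_path, footer_count=5):
    """Write a copy of the data file without its trailing footer records."""
    with open(in_path) as src:
        lines = src.readlines()
    with open(out_path, "w") as dst:
        dst.writelines(strip_footer(lines, footer_count))
```

The cleaned file is then the one named in the control file's INFILE clause.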
-
How to handle duplicate messages in J2SE File scenario
Hi,
Is there any way to handle processing of duplicate messages in a J2SE File adapter scenario?
Here is the scenario ,
Steps :
1. The engine picks up a message and checks its size.
2. Before the checking interval was reached, the (file) adapter was unexpectedly terminated.
3. The J2SE engine was restarted.
4. The previous file was picked up again and sent, as the first time, with one msgID.
5. After some time, the same file was picked up again with a new msgID.
6. The system gets only ONE confirmation that the file was successfully transferred.
So we end up with two messages containing the same file.
I have checked the J2SE documentation; there is a parameter called "db.exactlyOnceErrorInPendingState", which is related to the DB.
Is there a similar parameter to handle duplicate messages in File adapter scenarios on the J2SE engine?
Please help me in this regard, as it seems to be a new thing in the J2SE AE.
Regards,
Soorya

Hi Swarup,
But using an OS command, how can we rename/archive that file?
By "module", do you mean any custom module?
Following are the channel configurations used in File To File scenario.
File Sender :
version=30
mode=XMB2FILE
XI.httpPort=58201
XI.httpService=/test
XI.ReceiverAdapterForService=test_rcv
file.numberOfMessageTypes=1
XI.Interface=MI_test_out
XI.InterfaceNamespace=http://nestle.com/test
file.type=BIN
file.targetDir=/test_inb
file.targetFilename=unusedbutreq
file.writeMode=fromHeader.ext
file.createDir=0
file.nestleName=initial
file.nestleEXT=test
file.nestleFileOverwrite=False
File Receiver :
version=30
mode=FILE2XMB
XI.TargetURL=http://localhost:58201/test
XI.NestleTargetURL=http://localhost:58201/test
file.type=BIN
file.checkFileModificationInterval=300000
file.pollInterval=300
file.processingMode=archiveWithTimestamp
file.archiveDir=/test_out/arc
XI.QualityOfService=EO
file.numberOfMessageTypes=1
file.messageAttributes=name
XI.SenderService=test_snd
XI.Interface=MI_test_out
XI.InterfaceNamespace=http://nestle.com/test
XI.ReceiverService=test_rcv
file.nestleBadMsgDir=/test_out/bad
file.sourceDir=/test_out
file.sourceFilename=.
Please help as soon as possible.
Regards,
Prakash. -
In call transaction how can we handle error records.
Hi, tell me how we can handle error records in CALL TRANSACTION. We can write the error records to a flat file, and then we have to execute the same BDC program once again with that flat file (the error records). Is that right?
Hi,
Check this program; it will give you the idea.
Copy and run this program.
TYPES : BEGIN OF t_disp ,
vendorno(9),
compcc(13),
purchorg(14),
accgroup(15),
title(7),
name(5),
country(8),
ordcurr(14),
END OF t_disp.
TYPES : BEGIN OF t_err,
msgtyp LIKE bdcmsgcoll-msgtyp,
l_mstring(250),
END OF t_err.
DATA: i_disp TYPE STANDARD TABLE OF t_disp,
wa_disp TYPE t_disp,
i_err TYPE STANDARD TABLE OF t_err,
wa_err TYPE t_err.
* data definition
* batch input data of single transaction
DATA: bdcdata LIKE bdcdata OCCURS 0 WITH HEADER LINE.
* messages of call transaction
DATA: messtab LIKE bdcmsgcoll OCCURS 0 WITH HEADER LINE.
* error session opened (' ' or 'X')
DATA: e_group_opened.
* message texts
TABLES: t100.
PARAMETER : p_file1 LIKE ibipparms-path,
p_cmode LIKE ctu_params-dismode DEFAULT 'N'.
"A: show all dynpros
"E: show dynpro on error only
"N: do not display dynpro
* selection screen
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file1.
CALL FUNCTION 'F4_FILENAME'
EXPORTING
program_name = syst-cprog
dynpro_number = syst-dynnr
IMPORTING
file_name = p_file1.
AT SELECTION-SCREEN ON p_file1.
IF p_file1 IS INITIAL.
MESSAGE 'FILE IS NOT FOUND' TYPE 'E'.
ENDIF.
START-OF-SELECTION.
PERFORM f_disp_file1.
END-OF-SELECTION.
PERFORM f_disp_errs.
*&---------------------------------------------------------------------*
*&      Form  f_disp_file1
*&---------------------------------------------------------------------*
FORM f_disp_file1 .
DATA: l_filename1 TYPE string.
MOVE p_file1 TO l_filename1.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = l_filename1
filetype = 'ASC'
has_field_separator = 'X'
*   HEADER_LENGTH           = 0
*   READ_BY_LINE            = 'X'
*   DAT_MODE                = ' '
*   CODEPAGE                = ' '
*   IGNORE_CERR             = ABAP_TRUE
*   REPLACEMENT             = '#'
*   CHECK_BOM               = ' '
* IMPORTING
*   FILELENGTH              =
*   HEADER                  =
TABLES
data_tab = i_disp
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_READ_ERROR = 2
NO_BATCH = 3
GUI_REFUSE_FILETRANSFER = 4
INVALID_TYPE = 5
NO_AUTHORITY = 6
UNKNOWN_ERROR = 7
BAD_DATA_FORMAT = 8
HEADER_NOT_ALLOWED = 9
SEPARATOR_NOT_ALLOWED = 10
HEADER_TOO_LONG = 11
UNKNOWN_DP_ERROR = 12
ACCESS_DENIED = 13
DP_OUT_OF_MEMORY = 14
DISK_FULL = 15
DP_TIMEOUT = 16
OTHERS = 17.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
* prepare BDC data
DELETE i_disp INDEX 1.
LOOP AT i_disp INTO wa_disp .
PERFORM bdc_dynpro USING 'SAPMF02K' '0100'.
PERFORM bdc_field USING 'BDC_CURSOR'
'RF02K-KTOKK'.
PERFORM bdc_field USING 'BDC_OKCODE'
'/00'.
PERFORM bdc_field USING 'RF02K-LIFNR'
                        wa_disp-vendorno.   " was hard-coded: 'ztest_1'.
PERFORM bdc_field USING 'RF02K-BUKRS'
                        wa_disp-compcc.     " was hard-coded: '0001'.
PERFORM bdc_field USING 'RF02K-EKORG'
                        wa_disp-purchorg.   " was hard-coded: '0001'.
PERFORM bdc_field USING 'RF02K-KTOKK'
                        wa_disp-accgroup.   " was hard-coded: '0001'.
PERFORM bdc_dynpro USING 'SAPMF02K' '0110'.
PERFORM bdc_field USING 'BDC_CURSOR'
'LFA1-SPRAS'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=VW'.
PERFORM bdc_field USING 'LFA1-ANRED'
                        wa_disp-title.      " was hard-coded: 'Mr.'.
PERFORM bdc_field USING 'LFA1-NAME1'
                        wa_disp-name.       " was hard-coded: 'test name'.
PERFORM bdc_field USING 'LFA1-SORTL'
                        'TEST NAME'.
PERFORM bdc_field USING 'LFA1-LAND1'
                        wa_disp-country.    " was hard-coded: 'in'.
PERFORM bdc_field USING 'LFA1-SPRAS'
                        'en'.
PERFORM bdc_dynpro USING 'SAPMF02K' '0120'.
PERFORM bdc_field USING 'BDC_CURSOR'
'LFA1-KUNNR'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=VW'.
PERFORM bdc_dynpro USING 'SAPMF02K' '0130'.
PERFORM bdc_field USING 'BDC_CURSOR'
'LFBK-BANKS(01)'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=VW'.
PERFORM bdc_dynpro USING 'SAPMF02K' '0210'.
PERFORM bdc_field USING 'BDC_CURSOR'
'LFB1-AKONT'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=VW'.
PERFORM bdc_dynpro USING 'SAPMF02K' '0215'.
PERFORM bdc_field USING 'BDC_CURSOR'
'LFB1-ZTERM'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=VW'.
PERFORM bdc_dynpro USING 'SAPMF02K' '0220'.
PERFORM bdc_field USING 'BDC_CURSOR'
'LFB5-MAHNA'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=VW'.
PERFORM bdc_dynpro USING 'SAPMF02K' '0310'.
PERFORM bdc_field USING 'BDC_CURSOR'
'LFM1-WAERS'.
PERFORM bdc_field USING 'BDC_OKCODE'
'=UPDA'.
PERFORM bdc_field USING 'LFM1-WAERS'
                        wa_disp-ordcurr.    " was hard-coded: 'inr'.
PERFORM bdc_transaction USING 'XK01'.
WRITE: / wa_disp-vendorno,
         wa_disp-compcc,
         wa_disp-purchorg,
         wa_disp-accgroup,
         wa_disp-title,
         wa_disp-name,
         wa_disp-country,
         wa_disp-ordcurr.
CLEAR: wa_disp.
REFRESH bdcdata.
ENDLOOP.
ENDFORM. " F_DISP_FILE1
* Start new screen *
FORM bdc_dynpro USING program dynpro.
CLEAR bdcdata.
bdcdata-program = program.
bdcdata-dynpro = dynpro.
bdcdata-dynbegin = 'X'.
APPEND bdcdata.
ENDFORM. "BDC_DYNPRO
* Insert field *
FORM bdc_field USING fnam fval.
IF fval <> nodata.    " nodata flag, e.g. declared as: PARAMETER nodata DEFAULT '/'.
CLEAR bdcdata.
bdcdata-fnam = fnam.
bdcdata-fval = fval.
APPEND bdcdata.
ENDIF.
ENDFORM. "BDC_FIELD
*&---------------------------------------------------------------------*
*&      Form  bdc_transaction
*&---------------------------------------------------------------------*
FORM bdc_transaction USING tcode.
DATA: l_mstring(480),
      l_subrc   LIKE sy-subrc,
      smalllog  TYPE c.          " ' ' = write the detailed log
REFRESH messtab.
CALL TRANSACTION tcode USING bdcdata
MODE p_cmode
UPDATE 'L'
MESSAGES INTO messtab.
l_subrc = sy-subrc.
IF smalllog <> 'X'.
WRITE: / 'CALL_TRANSACTION',
TCODE,
'returncode:'(I05),
L_SUBRC,
'RECORD:',
SY-INDEX.
LOOP AT messtab.
SELECT SINGLE * FROM t100 WHERE sprsl = messtab-msgspra
AND arbgb = messtab-msgid
AND msgnr = messtab-msgnr.
IF sy-subrc = 0.
l_mstring = t100-text.
IF l_mstring CS '&1'.
REPLACE '&1' WITH messtab-msgv1 INTO l_mstring.
REPLACE '&2' WITH messtab-msgv2 INTO l_mstring.
REPLACE '&3' WITH messtab-msgv3 INTO l_mstring.
REPLACE '&4' WITH messtab-msgv4 INTO l_mstring.
ELSE.
REPLACE '&' WITH messtab-msgv1 INTO l_mstring.
REPLACE '&' WITH messtab-msgv2 INTO l_mstring.
REPLACE '&' WITH messtab-msgv3 INTO l_mstring.
REPLACE '&' WITH messtab-msgv4 INTO l_mstring.
ENDIF.
CONDENSE l_mstring.
WRITE: / messtab-msgtyp, l_mstring(250).
* send these errors to the error internal table
wa_err-msgtyp = messtab-msgtyp.
wa_err-l_mstring = l_mstring.
APPEND wa_err TO i_err.
ELSE.
WRITE: / messtab.
ENDIF.
CLEAR: messtab, wa_err.
ENDLOOP.
SKIP.
ENDIF.
ENDFORM. " bdc_transaction
*&---------------------------------------------------------------------*
*&      Form  f_disp_errs
*&---------------------------------------------------------------------*
FORM f_disp_errs .
SORT i_err BY msgtyp.
LOOP AT i_err INTO wa_err.
AT FIRST.
WRITE : / text-002.
ULINE.
ENDAT.
AT NEW msgtyp.
IF wa_err-msgtyp = 'S'.
WRITE : / text-003.
ULINE.
ELSEIF wa_err-msgtyp = 'E'.
WRITE : / text-001.
ULINE.
ENDIF.
ENDAT.
WRITE : / wa_err-msgtyp, wa_err-l_mstring.
CLEAR wa_err.
ENDLOOP.
ENDFORM. " f_disp_errs
Regards -
Split records into two files based on lookup table
Hi,
I'm new to ODI and want to know how I could split records into two files based on a value in one of the columns of a table.
Example:
Table:
My columns are:
account  country
100      USA
200      USA
300      UK
200      AUS
So from the 4 records: I maintain a list of countries in a lookup file and split the records into 2 different files based on the values in that file.
Say I have the records AUS and UK in my lookup file.
Then my ODI routine should send all records whose country is in the lookup file to file1, and the rest to file2.
So from above records
File1:
300 UK
200 AUS
File2:
100 USA
200 USA
Can you help me how to achieve this?
Thanks,
Sam

1. Where and how do I create the filter to restrict countries? In the source or the target? Should I include some kind of filter operator in the interface?
You need to have the filter on the source side so that records are filtered accordingly and captured in the file. To add a filter: in the source datastore, click and drag the column outside the datastore; you will see a cone-shaped icon, and you can then click it and type the filter.
Please look into this link for the ODI documentation: http://www.oracle.com/technetwork/middleware/data-integrator/documentation/index.html
Also look into this getting-started guide: http://download.oracle.com/docs/cd/E15985_01/doc.10136/getstart/GSETL.pdf . It describes how to create a filter.
2. If I want multiple countries (USA, CANADA, UK) to go to one file and the rest to another file, can I use some kind of lookup file instead of modifying the filter inside the interface? Can I update entries in the file?
There are two ways of handling your situation.
Solution 1:
1. Create a variable Country_Variable.
2. Create a filter on the source datastore in the first interface (SOURCE.COLUMN = #Country_Variable).
3. Create a new package, Country File Unload.
4. Call the variable Country_Variable in Set mode and provide the country (USA).
5. Next, call the first interface.
6. Next, call the second interface, where the filter condition will be (SOURCE.COLUMN != #Country_Variable).
7. Now run the package.
Solution 2:
If you prefer to handle it through the filter alone:
1. Use this method (http://odiexperts.com/how-to-refresh-odi-variables-from-file-%E2%80%93-part-1-%E2%80%93-just-one-value) to read the file and store the country name in the variable Country_Variable.
2. Pretty much the same: create a filter on the source datastore in the first interface (SOURCE.COLUMN = #Country_Variable).
3. Create a new package, Country File Unload.
4. Next, call the second interface, where the filter condition will be (SOURCE.COLUMN != #Country_Variable).
5. Now run the package.
This way the lookup file controls the split.
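Outside ODI, the splitting logic the two solutions implement can be sketched like this, using the account/country columns and the AUS/UK lookup from the example above:

```python
def split_by_lookup(records, lookup_countries):
    """Split (account, country) records into lookup matches and the rest."""
    matched, rest = [], []
    for account, country in records:
        target = matched if country in lookup_countries else rest
        target.append((account, country))
    return matched, rest
```

Here `matched` corresponds to File1 (countries present in the lookup file) and `rest` to File2.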
Please try it and let us know if you need any other help. -
How to generate bad records for varchar2 type
Hi
I am using sqlldr to load records from a flat file into a table.
The table columns are of VARCHAR2 datatype.
Data is loaded by fixed-length position.
There is no not null restriction on the fields.
I want to have a bad file generated.
Can anyone let me know how to get the bad file?
What type of data can be modified in a file to get a bad record.
Thanks in advance

> how to get the bad file?

Make the field length longer than its corresponding DB field length.
I.e. when your DB field is VARCHAR2(10), make one of the fields in the file longer than 10 characters. -
Handling error records in LSMW
Hi All,
How do we handle error records in LSMW?
Out of 20 records, I had 4 records and then 10 records captured in the error log.
How do I process them?
Thanks
Sonali

Hi Sonali,
You have to re-run the LSMW for the records that were recorded in the error log. Make sure the new flat file is error-free.
There is no other alternative to this.
Hope this solves your problem.
Regards,
Brajvir -
Urgent: how to separate bad records from the load and put them into a separate table
We have an error-handling requirement in ODI 11g from the client: whenever a bad record is encountered, the execution flow should not stop; instead it should divert those records into an error table, so that at the end of the load we are left with all the records (except the bad ones) in the target table, and the bad records are in a separate error table.
The definition of a bad record may include the size of a column or a datatype mismatch between the source and target tables. How can we implement this error-handling strategy in ODI, or is there any out-of-the-box solution we can leverage? Please help.
Thanks & Regards,
SBV
Edited by: user13133733 on Dec 23, 2011 4:45 AM

Hi SBV,
Please find my responses below,
I have tried the steps suggested, however i have some doubts:
1. What data exceptions (e.g. primary key constraint violation, etc.) are covered by this mechanism?
Yes, you can handle PK, FK and check constraint violations, etc., using the CKM.
2. If there is a column size mismatch between the source and target table, will this work? (I think not, because I tried it and it gives an error before populating the I$ table, since the I$ table is created according to the source.)
You are right; column size mismatches will not be captured by the default CKM behaviour.
Also, I am getting an error in the creation of the SNP_CHECK_TAB step. In my case ODI is by default issuing a query like "create table .SNP_CHECK_TAB"; the dot (.) before SNP_CHECK_TAB makes it an invalid table name, and hence this step ends in a warning (not an error). But the next step (delete previous checksum) throws an error, since it also looks for the .SNP_CHECK_TAB table, which is not there.
Please help me find where the issue lies. I have no idea why it builds that query by default; I have freshly imported the CKM Oracle and used it.
That is because no DEFAULT physical schema is defined for your target data server.
Go to Topology Manager -> Physical Architecture -> <Your Technology> -> <Your target data server>, expand it, open your physical schema and check DEFAULT.
Thanks,
Guru -
Bad Record MAC. exception while using urlConnection.getInputStream()
All,
In my SAP J2EE instance, from a filter, I am trying to get data from a URL (the protocol is HTTPS).
For this I use urlInstance.openConnection(), then urlConnection.getInputStream(), and then read from this input stream.
When I use http protocol, everything works fine. But with https, during urlConnection.getInputStream(), I get an exception saying "Bad Record MAC".
I tried to execute the same code in a standalone java program. It went fine with both http and https.
I get this error only when I run in SAP J2EE instance and only with https.
From the exception trace, I can see that while running in J2ee engine, the URLConnection instance is org.w3c.www.protocol.http.HttpURLConnection.
When I run a standalone program from command prompt, the instance is com.ibm.net.ssl.www.protocol.https.r. This runs without any issues.
I understand that these instances are different due to different URLStreamHandlerFactory instances.
But I didn't set the factory instance in either of the programs.
1. Is the exception I get simply because of the org.w3c.www.protocol.http.HttpURLConnection instance?
2. In that case how can I change the factory instance in j2ee engine?
Please help.
Thanks.
Edited by: Who-Am-I on Nov 28, 2009 7:54 AM

Maybe my question is too complex to understand. Let me put it simply.
After upgrading to NW 7.01 SP5, an existing communication between the SAP J2EE engine and a 3rd-party software stopped working.
After a lot of debugging I found that it happens at the filter we implemented to route requests through the 3rd-party authentication system.
At the filter, the exception comes from org.w3c.www.protocol.http.HttpURLConnection.getInputStream.
This instance of HttpURLConnection is given by the protocol handler factory (an implementation of URLStreamHandlerFactory) which SAP considers by default. I want to replace this protocol handler with another handler, as I see no problems running the same program outside the SAP J2EE instance.
Can we change this protocol handler factory? Where can we change it? -
Cannot process links - int error: bad record index
Opening a publication I have recently been working on suddenly fails with "Cannot process publication's links - Internal error - bad record index".
And then Pagemaker dies. Is there anyway to get this publication back? Hours of work.
The only possible anomaly I see is that the .pmd file was moved from one computer to another before this started happening, but we do this all the time in our shop. As a matter of fact it was moved on a memory stick, not as an email attachment or some less reliable means. The version on the stick fails the same way.

Well, that sounds close, but I (1) don't have any graphics on master pages, (2) never performed a "diagnostic recomposition", and (3) can't do anything anyway, since PageMaker crashes immediately after issuing the error message about the links.
-
Memory Management, Crit Structure Corrupt, Bad Pool Caller, ntfs file system error
So after freshly installing Win 8.1 I was happy, until I started to play some games that I previously played on Win 7. I've started to get a whole heap of errors. I did have more written down, but I have lost the bit of paper they were written on.
Here are the list of errors I can remember.
Memory Management, Crit Structure Corrupt, Bad Pool Caller, ntfs file system error
Specs:
Operating System
Windows 8.1 Pro 64-bit
CPU
AMD Athlon II X2 250
35 °C
Regor 45nm Technology
RAM
6.00GB Dual-Channel DDR2 @ 400MHz (5-5-5-18)
Motherboard
ASUSTeK Computer INC. M3N-H/HDMI (Socket AM2 )
40 °C
Graphics
SyncMaster (1440x900@60Hz)
SAMSUNG (1440x900@60Hz)
ATI Radeon HD 4800 Series (Microsoft Corporation - WDDM v1.1) (Sapphire/PCPartner)
Storage
698GB Western Digital WDC WD7500AAKS-00RBA0 ATA Device (SATA)

Adam,
These are related to atikmdag.sys, the ATI Radeon kernel-mode driver. Yours is 2+ years old. I would update to the newest driver available.
Microsoft (R) Windows Debugger Version 6.3.9600.16384 AMD64
Copyright (c) Microsoft Corporation. All rights reserved.
Loading Dump File [C:\Users\Ken\Desktop\A040614-65203-01.dmp]
Mini Kernel Dump File: Only registers and stack trace are available
************* Symbol Path validation summary **************
Response Time (ms) Location
Deferred SRV*H:\symbols*http://msdl.microsoft.com/download/symbols
Symbol search path is: SRV*H:\symbols*http://msdl.microsoft.com/download/symbols
Executable search path is:
Windows 8 Kernel Version 9600 MP (2 procs) Free x64
Product: WinNt, suite: TerminalServer SingleUserTS
Built by: 9600.16384.amd64fre.winblue_rtm.130821-1623
Machine Name:
Kernel base = 0xfffff803`be013000 PsLoadedModuleList = 0xfffff803`be2da9b0
Debug session time: Mon Apr 7 00:35:03.785 2014 (UTC - 4:00)
System Uptime: 0 days 0:08:48.334
Loading Kernel Symbols
Loading User Symbols
Loading unloaded module list
* Bugcheck Analysis *
Use !analyze -v to get detailed debugging information.
BugCheck 3B, {c0000005, fffff80001cf7d73, ffffd0002305c470, 0}
*** WARNING: Unable to verify timestamp for atikmdag.sys
*** ERROR: Module load completed but symbols could not be loaded for atikmdag.sys
Probably caused by : atikmdag.sys ( atikmdag+70d73 )
Followup: MachineOwner
0: kd> !analyze -v
* Bugcheck Analysis *
SYSTEM_SERVICE_EXCEPTION (3b)
An exception happened while executing a system service routine.
Arguments:
Arg1: 00000000c0000005, Exception code that caused the bugcheck
Arg2: fffff80001cf7d73, Address of the instruction which caused the bugcheck
Arg3: ffffd0002305c470, Address of the context record for the exception that caused the bugcheck
Arg4: 0000000000000000, zero.
Debugging Details:
EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.
FAULTING_IP:
atikmdag+70d73
fffff800`01cf7d73 8a040a mov al,byte ptr [rdx+rcx]
CONTEXT: ffffd0002305c470 -- (.cxr 0xffffd0002305c470;r)
rax=0000000000000000 rbx=ffffcf80021783a0 rcx=ffffcf8002334ff5
rdx=0000307ffdccb010 rsi=ffffcf8004734470 rdi=ffffcf80021783a0
rip=fffff80001cf7d73 rsp=ffffd0002305cea8 rbp=ffffd0002305cf00
r8=0000000000000006 r9=0000000000000000 r10=0000000000000384
r11=ffffcf8002334ff0 r12=0000000000000001 r13=ffffcf8004734470
r14=0000000000000000 r15=ffffcf8004734470
iopl=0 nv up ei ng nz na po nc
cs=0010 ss=0018 ds=002b es=002b fs=0053 gs=002b efl=00010286
atikmdag+0x70d73:
fffff800`01cf7d73 8a040a mov al,byte ptr [rdx+rcx] ds:002b:00000000`00000005=??
Last set context:
rax=0000000000000000 rbx=ffffcf80021783a0 rcx=ffffcf8002334ff5
rdx=0000307ffdccb010 rsi=ffffcf8004734470 rdi=ffffcf80021783a0
rip=fffff80001cf7d73 rsp=ffffd0002305cea8 rbp=ffffd0002305cf00
r8=0000000000000006 r9=0000000000000000 r10=0000000000000384
r11=ffffcf8002334ff0 r12=0000000000000001 r13=ffffcf8004734470
r14=0000000000000000 r15=ffffcf8004734470
iopl=0 nv up ei ng nz na po nc
cs=0010 ss=0018 ds=002b es=002b fs=0053 gs=002b efl=00010286
atikmdag+0x70d73:
fffff800`01cf7d73 8a040a mov al,byte ptr [rdx+rcx] ds:002b:00000000`00000005=??
Resetting default scope
CUSTOMER_CRASH_COUNT: 1
DEFAULT_BUCKET_ID: VERIFIER_ENABLED_VISTA_MINIDUMP
BUGCHECK_STR: 0x3B
PROCESS_NAME: csrss.exe
CURRENT_IRQL: 0
ANALYSIS_VERSION: 6.3.9600.16384 (debuggers(dbg).130821-1623) amd64fre
LAST_CONTROL_TRANSFER: from fffff800025cf3d7 to fffff80001cf7d73
STACK_TEXT:
ffffd000`2305cea8 fffff800`025cf3d7 : ffffcf80`04734470 ffffcf80`04734470 00000000`00000000 ffffcf80`01f52450 : atikmdag+0x70d73
ffffd000`2305ceb0 ffffcf80`04734470 : ffffcf80`04734470 00000000`00000000 ffffcf80`01f52450 ffffd000`2305cf00 : atikmdag+0x9483d7
ffffd000`2305ceb8 ffffcf80`04734470 : 00000000`00000000 ffffcf80`01f52450 ffffd000`2305cf00 fffff800`025f48d3 : 0xffffcf80`04734470
ffffd000`2305cec0 00000000`00000000 : ffffcf80`01f52450 ffffd000`2305cf00 fffff800`025f48d3 ffffcf80`04734470 : 0xffffcf80`04734470
FOLLOWUP_IP:
atikmdag+70d73
fffff800`01cf7d73 8a040a mov al,byte ptr [rdx+rcx]
SYMBOL_STACK_INDEX: 0
SYMBOL_NAME: atikmdag+70d73
FOLLOWUP_NAME: MachineOwner
MODULE_NAME: atikmdag
IMAGE_NAME: atikmdag.sys
DEBUG_FLR_IMAGE_TIMESTAMP: 4fdf9bbd
STACK_COMMAND: .cxr 0xffffd0002305c470 ; kb
FAILURE_BUCKET_ID: 0x3B_VRF_atikmdag+70d73
BUCKET_ID: 0x3B_VRF_atikmdag+70d73
ANALYSIS_SOURCE: KM
FAILURE_ID_HASH_STRING: km:0x3b_vrf_atikmdag+70d73
FAILURE_ID_HASH: {1ec4cdaf-008d-03dd-4ca9-ade1993441da}
Followup: MachineOwner
Wanikiya and Dyami--Team Zigzag -
How does the filesystem handle bad blocks?
Let's suppose your hard drive develops a bad sector on it. What happens?
I remember early products where the OS would constantly try to store new files right on top of the bad block, find the file couldn't be written/verified, then would write it elsewhere... then the next time it would do the same thing again, expecting the block to "heal", I guess.
What happens on MacOS (extended, journaled) if it's writing a file and finds a bad block in freespace? Does the OS know how to "cocoon off" or tag the bad block as bad and not use it?
Now what happens if the bad block develops underneath an active file? What's the recovery pattern? Or is there none; does it just leave the block bad permanently? Is there any way to fix that?

Boot your computer from the DVD that came with it, and use Disk Utility to check the status of the disk. Select the disk and look at the S.M.A.R.T. status at the bottom. If it displays anything other than Verified, then the disk isn't going to last much longer.
Modern hard disks don't develop "bad sectors" that the OS needs to handle. The disk drive itself automatically remaps bad sectors to spare unused sectors in a way that is transparent to the OS. When too many of those bad sectors are remapped, the S.M.A.R.T. status will report an error.
What you're describing is a failing disk. It's more than likely the early stages of a total failure, rather than just a bunch of bad sectors. Same thing happened to my MacBook a few months ago. After about two hours, it failed completely. I picked up a new drive on the way home, and restored it from Time Machine overnight. The only data that I lost was an iTunes purchase that I made that morning (iTunes Support let me download it again). You have a backup, right?
It's also possible that you've either run out of free space on the startup disk, or you have a process running that's consuming too much CPU and/or RAM resources. If that's the case, a reboot will clear it (until whatever caused it happens again). -
Sqlldr error 510: Physical record in data file is longer than the max 1048576
SQL*Loader: Release 10.2.0.2.0 - Production on Fri Sep 21 10:15:31 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Control File: /apps/towin_p/bin/BestNetwork.CTL
Data File: /work/towin_p/MyData.dat
Bad File: /apps/towin_p/bin/BestNetwork.BAD
Discard File: none specified
(Allow all discards)
Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct
Load is UNRECOVERABLE; invalidation redo is produced.
Table "BN_ADM"."DWI_USAGE_DETAIL", loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect
Column Name Position Len Term Encl Datatype
USAGE_DETAIL_DT FIRST * , DATE MM/DD/YYYY HH24:MI:SS
UNIQUE_KEY SEQUENCE (MAX, 1)
LOAD_DT SYSDATE
USAGE_DETAIL_KEY NEXT * , CHARACTER
RATE_AREA_KEY NEXT * , CHARACTER
UNIT_OF_MEASURE_KEY NEXT * , CHARACTER
CALL_TERMINATION_REASON_KEY NEXT * , CHARACTER
RATE_PLAN_KEY NEXT * , CHARACTER
CHANNEL_KEY NEXT * , CHARACTER
SERIALIZED_ITEM_KEY NEXT * , CHARACTER
HOME_CARRIER_KEY NEXT * , CHARACTER
SERVING_CARRIER_KEY NEXT * , CHARACTER
ORIGINATING_CELL_SITE_KEY NEXT * , CHARACTER
TERMINATING_CELL_SITE_KEY NEXT * , CHARACTER
CALL_DIRECTION_KEY NEXT * , CHARACTER
SUBSCRIBER_LOCATION_KEY NEXT * , CHARACTER
OTHER_PARTY_LOCATION_KEY NEXT * , CHARACTER
USAGE_PEAK_TYPE_KEY NEXT * , CHARACTER
DAY_OF_WEEK_KEY NEXT * , CHARACTER
FEATURE_KEY NEXT * , CHARACTER
WIS_PROVIDER_KEY NEXT * , CHARACTER
SUBSCRIBER_KEY NEXT * , CHARACTER
SUBSCRIBER_ID NEXT * , CHARACTER
SPECIAL_NUMBER_KEY NEXT * , CHARACTER
TOLL_TYPE_KEY NEXT * , CHARACTER
BILL_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
BILLING_CYCLE_KEY NEXT * , CHARACTER
MESSAGE_SWITCH_ID NEXT * , CHARACTER
MESSAGE_TYPE NEXT * , CHARACTER
ORIGINATING_CELL_SITE_CD NEXT * , CHARACTER
TERMINATING_CELL_SITE_CD NEXT * , CHARACTER
CALL_ACTION_CODE NEXT * , CHARACTER
USAGE_SECONDS NEXT * , CHARACTER
SUBSCRIBER_PHONE_NO NEXT * , CHARACTER
OTHER_PARTY_PHONE_NO NEXT * , CHARACTER
BILLED_IND NEXT * , CHARACTER
NO_USERS_IN_CALL NEXT * , CHARACTER
DAP_NO_OF_DSAS_USED NEXT * , CHARACTER
USAGE_SOURCE NEXT * , CHARACTER
SOURCE_LOAD_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
SOURCE_UPDATE_DT NEXT * , DATE MM/DD/YYYY HH24:MI:SS
RATE_PLAN_ID NEXT * , CHARACTER
NETWORK_ELEMENT_KEY NEXT * , CHARACTER
SQL string for column : "-2"
SQL*Loader-510: Physical record in data file (/work/towin_p/MyData.dat) is longer than the maximum(1048576)
SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
Table "BN_ADM"."DWI_USAGE_DETAIL":
0 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Date conversion cache disabled due to overflow (default size: 1000)
Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 7000382
Total logical records rejected: 0
Total logical records discarded: 0
Total stream buffers loaded by SQL*Loader main thread: 1666
Total stream buffers loaded by SQL*Loader load thread: 4996
Run began on Fri Sep 21 10:15:31 2007
Run ended on Fri Sep 21 10:27:14 2007
Elapsed time was: 00:11:43.56
CPU time was: 00:05:36.81

What options are you using in the CTL file? What does your data file look like (e.g. one record per line, or the whole file on a single line)?
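SQL*Loader-510 means a single physical record exceeded the read buffer, which defaults to the 1048576 bytes named in the message. That usually happens for one of two reasons: the file genuinely has very long records, in which case raising READSIZE helps, or the record terminator is not what SQL*Loader expects (for example, a file with no newlines at all looks like one giant record). A sketch of the relevant control-file options; the 20 MB READSIZE value and the '\n' terminator are assumptions to adapt to your data, and the column list is elided:

```
-- Sketch only: READSIZE is an assumed 20 MB; size it above your longest record.
OPTIONS (DIRECT=TRUE, READSIZE=20971520)
LOAD DATA
-- The "str '\n'" clause makes the record terminator explicit; if your file uses
-- a different terminator (or none), that is the first thing to check for -510.
INFILE '/work/towin_p/MyData.dat' "str '\n'"
APPEND
INTO TABLE "BN_ADM"."DWI_USAGE_DETAIL"
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
  -- column list as shown in the log above
)
```

If raising READSIZE only moves the failure to a later record, suspect the terminator rather than the record length.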