CRLF issue
Hi,
We are gearing up for integration testing with our 30-odd EDI partners. A small subset of them use the carriage-return/line-feed hexadecimal values (0D 0A) as delimiters in our inbound documents, which could cause our BIC mappings to fail. Since we use a single map for all partners, I would like to handle this at the PI level. I have read the SEEBURGER manuals and came across the following settings for our sender communication channels. Would this be the right approach, whereby we keep our BIC mappings intact and accommodate the partner delimiter differences at the communication-channel level?
binaryMode     true
searchString   "270A"
replaceString  "270D0A"

Module Name : localejbs/Seeburger/ReplaceString
Module Key  : rst

Module Key   Parameter Name      Parameter Value
rst          sourceDest          MainDocument
rst          targetDest          MainDocument
rst          searchString        "search string"
rst          replaceString       "replace string"
rst          regularExpression   true
Thanks,
Teresa
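For what it's worth, the ReplaceString module settings above amount to a byte-level search-and-replace on the payload before BIC runs. A minimal Python sketch of the equivalent operation (the hex strings 270A / 270D0A are taken from the settings above; for inbound normalization you would typically swap them to strip the CR):

```python
def replace_hex(payload: bytes, search_hex: str, replace_hex_str: str) -> bytes:
    """Byte-level search/replace, mimicking binaryMode=true behaviour."""
    return payload.replace(bytes.fromhex(search_hex),
                           bytes.fromhex(replace_hex_str))

# 27 0A ("'" + LF) becomes 27 0D 0A ("'" + CRLF), normalising the
# segment terminator so a single BIC mapping can handle all partners.
edi = b"UNB+UNOA:1+SENDER+RECEIVER'\nUNH+1+ORDERS'\n"
print(replace_hex(edi, "270A", "270D0A"))
```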
We have started our testing, and the first thing that hit us was the incoming 997 document. We got the following error: "BICMODULE: Temporary error: BIC XI Adapter call failed. Reason: InhouseDocReader doSyntaxCheck(): offset[262]: the found segment S is not in the message description. DESCRIPTION: InhouseDocReader Error: The Segment S is missing in the message description". I checked the payload against the implementation guide and it looks good. The ISA has its IEA, the GS its GE, and the ST its SE. In between we have the AK segments. Any advice?
Teresa
Similar Messages
-
Hi Team,
we are using the Microsoft Report Viewer in one of our projects, and recently our code was scanned by Veracode,
which identified CRLF injection issues related to improper neutralization.
Please find the reported issue below:
This call to system_web_dll.System.Web.HttpResponse.set_ContentType() contains an HTTP response splitting flaw. Writing unsanitized user-supplied input into an HTTP header allows an attacker
to manipulate the HTTP response rendered by the browser, leading to cache poisoning and cross-site scripting attacks. The first argument to set_ContentType() contains tainted data. The tainted data originated from an earlier call to system_dll.system.net.httpwebrequest.endgetresponse.
Remove unexpected carriage returns and line feeds from user-supplied data used to construct HTTP response headers. Whenever possible, use a security library such as ESAPI that provides safe
versions of addHeader(), etc. that will automatically remove unexpected carriage returns and line feeds and can be configured to use HTML entity encoding for non-alphanumeric data. Only write custom blacklisting code when absolutely necessary. Always validate
user-supplied input to ensure that it conforms to the expected format, using centralized data validation routines when possible.
References:
CWE (http://cwe.mitre.org/data/definitions/113.html)
OWASP (http://www.owasp.org/index.php/HTTP_Response_Splitting)
WASC (http://webappsec.pbworks.com/HTTP-Response-Splitting)
Is there any fix for the above Veracode issue?
I too have a similar kind of issue:-
Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Response Splitting')
Description
A function call contains an HTTP response splitting flaw. Writing unsanitized user-supplied input into an HTTP header
allows an attacker to manipulate the HTTP response rendered by the browser, leading to cache poisoning and cross-site
scripting attacks.
Recommendations
Remove unexpected carriage returns and line feeds from user-supplied data used to construct an HTTP response.
Always validate user-supplied input to ensure that it conforms to the expected format, using centralized data validation
routines when possible.
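As a framework-neutral illustration of that recommendation (not the exact Veracode-approved fix for set_ContentType), stripping CR/LF and other control characters from any tainted value before it reaches a header might look like this:

```python
import re

# Control characters, including CR (0x0D) and LF (0x0A), have no place
# in an HTTP header value; removing them blocks response splitting (CWE-113).
_CTRL = re.compile(r"[\x00-\x1f\x7f]")

def sanitize_header_value(value: str) -> str:
    return _CTRL.sub("", value)

tainted = "text/plain\r\nSet-Cookie: session=attacker"
print(sanitize_header_value(tainted))  # text/plainSet-Cookie: session=attacker
```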
Instances found via Static Scan
MvcReportViewer.dll void SetStreamingHeaders(string,
System.Web.HttpResponse) -
KIMYONG : Basic Export / Attachment Issues Guide
Purpose
======
This document introduces a basic troubleshooting guide that
Support Analysts / DBAs can use when Export / Attachment issues occur.
Explanations
======
Export Analysis
Turn on export debug:
Go to Help -> Diagnostics -> Examine
Set Block = GLOBAL
Set Field = FND_EXPORT_DEBUG
Set Value = TRUE
Then export and observe the messages that are generated during the export process
Important Parameters.
set serveroutput on

declare
  plsql_agent   varchar2(200);
  web_server    varchar2(200);
  dad           varchar2(200);
  gfm_agent     varchar2(200);
  protocol      varchar2(200);
  database_id   varchar2(200);
  jsp_agent     varchar2(200);
  check_enabled varchar2(200);
begin
  plsql_agent := fnd_web_config.plsql_agent;
  dbms_output.put_line('PL/SQL Agent ->'||plsql_agent);
  web_server := fnd_web_config.web_server;
  dbms_output.put_line('Web Server ->'||web_server);
  dad := fnd_web_config.dad;
  dbms_output.put_line('DAD ->'||dad);
  gfm_agent := fnd_web_config.gfm_agent;
  dbms_output.put_line('GFM Agent ->'||gfm_agent);
  protocol := fnd_web_config.protocol;
  dbms_output.put_line('Protocol ->'||protocol);
  database_id := fnd_web_config.database_id;
  dbms_output.put_line('Database Id ->'||database_id);
  jsp_agent := fnd_web_config.jsp_agent;
  dbms_output.put_line('JSP Agent ->'||jsp_agent);
  check_enabled := fnd_web_config.check_enabled('FND_GFM.GET');
  dbms_output.put_line('FND_GFM.GET ->'||check_enabled);
end;
/
Examining SQL Trace for the sequence of events that happen in the Export process
SQL >alter session set events '10046 trace name context forever, level 12';
Then run the following block of pl/sql code
set serveroutput on

declare
  db_file           number;
  mime_type         varchar2(255)   := 'text/plain';
  out_string        varchar2(32767) := 'Just some plain text that is stored';
  web_server_prefix varchar2(500);
  url               varchar2(500);
begin
  db_file := fnd_gfm.file_create(content_type => mime_type, program_name => 'export');
  fnd_gfm.file_write_line(db_file, out_string);
  db_file := fnd_gfm.file_close(db_file);
  url := fnd_gfm.construct_download_url(fnd_web_config.gfm_agent, db_file, TRUE);
  dbms_output.put_line(url);
end;
/
Exit the SQL*Plus session and examine the SQL trace file generated in USER_DUMP_DEST:
$ ls -lrt
Refer to Note # 282806.1, "Performance Tuning Approach for Oracle (8.1.6 - 9.2.0.5) on
UNIX", for more information on how to obtain SQL tracing.
Example of download URL :-
http://finance.sriratu:8001/pls/SR/fndgfm/fnd_gfm.get/776537528/202595/fnd_gfm.tsv
http://aoltest2.idc.oracle.com:8000/pls/VIS/fndgfm/fnd_gfm.get/820067633/298941/Screen_shots.doc
Example of Upload Attachment URL:
http://aoltest2.idc.oracle.com:8000/pls/VIS/OracleSSWA.Execute?
E=%7B!2DAF44968EBBEC83211B5D5F27F58334FBFB2B90E38AD205&P=%7B!BEFD8114A932C86A1548EC73FFCF6EADB4F7826B217EDCE92719B62BDA9FF0AF193DC7BC64A2C60AFC5123B50C8C78F9E6807695ED9A7FE7AE87F8E49E80807223756706B3FC777F645FA5A07C7A467B
http://aoltest2.idc.oracle.com:8000/pls/VIS/OracleSSWA.Execute?
E=%7B!2DAF44968EBBEC83211B5D5F27F58334FBFB2B90E38AD205&P=%7B!BEFD8114A932C86A5525987DB9C8D9785657497306AAE1FD25D1CC352ADF38DFD69C21355096CBC38D285B083D24F261701F5F278E199044D603A5A8B1D588292099782AC4AF3D97E23B95936809D280
To check the row being created in the table FND_LOBS during Export or Attachment
SQL>create table fnd_lobs_bak as
select file_id,file_name from fnd_lobs ;
SQL>select * from fnd_lobs
where file_id not in
(select file_id from fnd_lobs_bak );
SQL>select * from fnd_lobs
where to_char(upload_date,'DD/MM/YYYY')=to_char(sysdate,'DD/MM/YYYY');
Analysis on an Attachment
Help -> Diagnostics -> Examine
Block : DOCUMENT_HEADER
Field : ATTACHED_DOCUMENT_ID
Note down <Value>
SQL>select document_id
from fnd_attached_documents
where attached_document_id=<Value>;
SQL>select media_id
from fnd_documents_tl
where document_id=<document_id>;
SQL>select *
from fnd_lobs
where file_id=<media_id>;
SQL>select *
from fnd_documents_short_text
where media_id=<media_id>;
SQL>select *
from fnd_documents_long_text
where media_id=<media_id>;
SQL>select *
from fnd_documents_long_raw
where media_id=<media_id>;
FND_LOBS stores information about all LOBs managed by the Generic File Manager (GFM).
Each row includes the file identifier, name, content-type, and actual data. Each row also
includes the dates the file was uploaded and will expire, the associated program name and
tag, and the language and Oracle character set.
The file data, which is a binary LOB, is stored exactly as it is uploaded from a client browser,
which means that no translation work is required during a download to make it HTTP compliant.
Therefore uploads from non-browser sources will have to prepare the contents
appropriately (for instance, separating lines with CRLF).
The program_name and program_tag may be used by clients of the GFM for any purpose,
such as striping, partitioning, or purging the table if the program is de-installed.
They are otherwise strictly informative.
These columns and the expiration date are properly set when the
procedure FND_GFM.CONFIRM_UPLOAD is called. If it is not called, the column
expiration_date remains set, and the row will eventually be purged by the procedure
FND_GFM.PURGE_EXPIRED.
FND_DOCUMENTS_LONG_RAW stores images and OLE
Objects, such as Word Documents and Excel
spreadsheets, in the database. If the user elects
to link an OLE Object to the document, this table
stores the information necessary for Oracle Forms
to activate the OLE server, and it saves a
bit-mapped image of the OLE server's contents.
If the user does not elect to link an OLE Object,
the entire document will be stored in this table.
FND_DOCUMENTS_LONG_TEXT stores information about
long text documents.
FND_DOCUMENTS_SHORT_TEXT stores information about
short text documents.
To know which Forms provide Attachment feature
SQL>select *
from fnd_attachment_functions
where function_name like '%FND_%';
Examining FND_LOBS tablespace
SQL>select tablespace_name
from dba_tables
where table_name='FND_LOBS';
SQL>select a.tablespace_name TABLESPACE_NAME, a.bytes TOTAL_BYTES,
sum(b.bytes) FREE_BYTES, count(*) EXTENTS
from dba_data_files a, dba_free_space b
where a.file_id = b.file_id and a.tablespace_name = '<TABLESPACE_NAME>'
group by a.tablespace_name, a.bytes
order by a.tablespace_name;
Examining Profile Option value
SQL>select *
from fnd_profile_options_tl
where profile_option_name='FND_EXPORT_MIME_TYPE' ;
SQL>select b.profile_option_name,level_id,profile_option_value
from fnd_profile_option_values a, fnd_profile_options b
where a.application_id=b.application_id
and a.profile_option_id=b.profile_option_id
and b.profile_option_name in ('FND_EXPORT_MIME_TYPE') ;
Procedure FND_GFM.GET ANALYSIS
http://aoltest2.idc.oracle.com:8000/pls/VIS/fndgfm/fnd_gfm.get/560074272/298951/fnd_gfm.doc
access
SQL>select substr('/560074272/298951/fnd_gfm.doc',
        instr('/560074272/298951/fnd_gfm.doc','/',1)+1,
        instr('/560074272/298951/fnd_gfm.doc','/',2)-2) access
from dual;
560074272
file_id
SQL>select substr('/560074272/298951/fnd_gfm.doc',
        instr('/560074272/298951/fnd_gfm.doc','/',2)+1,
        instr('/560074272/298951/fnd_gfm.doc','/',-1)
          - instr('/560074272/298951/fnd_gfm.doc','/',2)-1)
from dual;
298951
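The two SUBSTR/INSTR expressions above decompose the fnd_gfm.get URL path into its access code and file_id. The same decomposition, sketched in Python for readability:

```python
def parse_gfm_path(path: str):
    """Split '/<access>/<file_id>/<filename>' as passed to fnd_gfm.get."""
    access, file_id, filename = path.lstrip("/").split("/", 2)
    return access, file_id, filename

print(parse_gfm_path("/560074272/298951/fnd_gfm.doc"))
# ('560074272', '298951', 'fnd_gfm.doc')
```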
Profile Options being referenced in the package FND_GFM
FND_EXPORT_MIME_TYPE
FND_NATIVE_CLIENT_ENCODING
Lookup Type Being used in the package FND_GFM
SQL>select tag,lookup_code,meaning
from fnd_lookup_values_vl
where lookup_type='FND_ISO_CHARACTER_SET_MAP';
Reference
========
Note 338651.1 -
Content Conversion headerline with CRLF not working
I am using content conversion in my receiver channel. I have a header line (addHeaderLine=3 and headerLine=xxxxx). My xml.endSeparator is '0x0D''0x0A'. However, the header line is not getting the '0x0D''0x0A'; it appears to be getting only the LF, and not the CRLF that a Windows system requires. This causes issues because it concatenates the first line of data onto the end of the header line. I have read of others having this same issue on SDN, but I haven't seen a solution except to separate the header in the receiver structure and map the header values in the message mapping. Is there any other way?
Check
http://help.sap.com/saphelp_nw70/helpdata/en/d2/bab440c97f3716e10000000a155106/frameset.htm
Instead of '0x0D''0x0A' you can use 'nl' directly.
Also check the syntax of the parameter names:
NameA.addHeaderLine = 3
NameA.headerLine = xxxxx
NameA.endSeparator = 'nl'
Regards
Raj -
ICloud IMAP server does not send the CAPABILITY with CRLF
The iCloud IMAP server does not send the CAPABILITY response with CRLF appended, as required by RFC 3501. Please find the log snippet below:
11-05 10:50:52.462 29603 29988 D Email : open :: socket openjava.io.BufferedInputStream@43a726a8 | java.io.BufferedOutputStream@43a72b68
11-05 10:50:52.502 29603 29988 D Email : <<< #null# ["OK", ["CAPABILITY", "st11p00mm-iscream023", "1S", "XAPPLEPUSHSERVICE", "IMAP4", "IMAP4rev1", "SASL-IR", "AUTH=ATOKEN", "AUTH=PLAIN"], "iSCREAM ready to rumble (1S:1092) st11p00mm-iscream023 [42:4469:15:50:53:39]"]
11-05 10:50:52.502 29603 29988 D Email : >>> 1 CAPABILITY
11-05 10:50:52.552 29603 29988 D Email : <<< #null# ["CAPABILITY", "st11p00mm-iscream023", "1S", "XAPPLEPUSHSERVICE", "IMAP4", "IMAP4rev1", "SASL-IR", "AUTH=ATOKEN", "AUTH=PLAIN"]
11-05 10:50:52.562 29603 29988 D Email : <<< #1# ["OK", "!!"]
11-05 10:50:52.582 29603 29988 D Email : >>> [IMAP command redacted]
11-05 10:50:52.682 29603 29988 D Email : <<< #2# ["OK", ["CAPABILITY", "XAPPLEPUSHSERVICE", "IMAP4", "IMAP4rev1", "ACL", "QUOTA", "LITERAL+", "NAMESPACE", "UIDPLUS", "CHILDREN", "BINARY", "UNSELECT", "SORT", "CATENATE", "URLAUTH", "LANGUAGE", "ESEARCH", "ESORT", "THREAD=ORDEREDSUBJECT", "THREAD=REFERENCES", "CONDSTORE", "ENABLE", "CONTEXT=SEARCH", "CONTEXT=SORT", "WITHIN", "SASL-IR", "SEARCHRES", "XSENDER", "X-NETSCAPE", "XSERVERINFO", "X-SUN-SORT", "ANNOTATE-EXPERIMENT-1", "X-UNAUTHENTICATE", "X-SUN-IMAP", "X-ANNOTATEMORE", "XUM1", "ID", "IDLE"], "User test logged in"]
11-05 10:50:52.682 29603 29988 D Email : >>> 3 CAPABILITY
11-05 10:50:52.742 29603 29988 W Email : Exception detected: Expected 000a (LF) but got 000d (CR)
This is happening only when the CAPABILITY command is sent followed by the LOGIN command. Please check this issue.

If you want your mail delivered properly, the Official Host Name of the sending server should match the PTR (reverse DNS) of the sending IP address, and there should be an "A" record that matches the OHN as well.
Example:
mail.yourdomain.com (Official Host Name) on 123.123.123.123
PTR for 123.123.123.123 should match mail.yourdomain.com
There should be an A record in yourdomain.com pointing to 123.123.123.123
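Back on the CRLF problem itself: one defensive option on the client side is a lenient line splitter that accepts CRLF, bare CR, or bare LF. This is a hypothetical sketch, not the actual Android Email source:

```python
import re

def split_imap_lines(buffer: bytes):
    """Split a server buffer on CRLF, bare CR, or bare LF.
    RFC 3501 mandates CRLF, so accepting a lone CR or LF is a
    deliberate leniency for servers like this one."""
    return re.split(rb"\r\n|\r|\n", buffer)

# A response terminated by a bare CR still parses:
print(split_imap_lines(b"* CAPABILITY IMAP4rev1\r1 OK done\r\n"))
```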
Kostas -
Base64Binary OSB and File Adapter Issue
Hi all,
I am converting an XML to a flat file, and then I want to write the flat file using the File Adapter in OSB.
Using a Java callout we converted the binary content to a base64 string, and we are able to write the data using the File Adapter.
But the file contains an extra line between each line, which my legacy system won't accept.
I printed the base64 string in the output file, and when I decode that file using a website ("safe decode as text") I get the correct file.
So where is the problem? The File Adapter? Why am I getting an extra line between each line, and how can I write safely using the File Adapter?
If I use the file transport of OSB I am able to write without any issues, but I need to write to a dynamic location, which is why I am looking beyond the file transport.
Thanks
Phani

Hi Anju,
Thanks for the response. If I decode the base64 binary data using that website, or open it in Notepad++, I can't see an extra line; Notepad++ shows it as below.
It's a fixed-length file (1 to 513, then the next line), so for the file transport it shows as below:
ISA ............................
1 to 513 CRLF
For the File Adapter it shows as below in the same Notepad++ editor:
ISA..........................
1 to 513 CR
CRLF
an extra CR, so an extra line.
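A hedged post-processing workaround (assuming the symptom really is a doubled CR before each CRLF, as the Notepad++ comparison suggests) is to normalize the line endings after the base64 decode, before handing the bytes to the legacy system:

```python
def normalize_crlf(data: bytes) -> bytes:
    """Collapse CR CR LF to CR LF so the fixed-length
    (1..513 + CRLF) record layout survives intact."""
    return data.replace(b"\r\r\n", b"\r\n")

record = b"ISA...payload...\r\r\n"   # extra CR, as seen from the File Adapter
print(normalize_crlf(record))        # b'ISA...payload...\r\n'
```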
I used MFL and converted the XML to non-XML (a fixed-length flat file). For the file transport I created another proxy with message type Text, and it is working fine.
In addition to the above problem, I have a few more questions on the file transport:
How can I append to an existing file using the file transport?
How can I change the file directory dynamically? -
Arabic characters issue in smtp email csv attachment
Hi all,
the procedure below extracts the output of a query and sends it as an attachment in CSV format,
but the Arabic characters come through as question marks in the attachment. Can someone kindly help? Below is the code:
procedure test is
smtp UTL_SMTP.connection;
reply UTL_SMTP.reply;
csvContent clob;---added
procedure W( line varchar2 default null ) is
begin
UTL_SMTP.write_data(
smtp,
line || utl_tcp.CRLF );
end;
begin
smtp := UTL_SMTP.open_connection('test.domain.com',25);
--// IMPORTANT: specify the hostname of the plaform sending the mail!
UTL_SMTP.helo( smtp,'test.domain.com');
UTL_SMTP.mail( smtp,'[email protected]' );
UTL_SMTP.rcpt( smtp,'[email protected]' );
UTL_SMTP.open_data( smtp );
--// mail header
W( 'MIME-Version: 1.0' );
W( 'Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01C87DCB.CD85F300"');
W( 'Subject: test' );
W( 'From: test' );
--// mail text body
W();
W( '------_=_NextPart_001_01C87DCB.CD85F300' );
W( 'Content-Transfer-Encoding: 8bit' );
W( 'Content-Type: text/plain' );
W( 'Charset: AL32UTF8' );
W();
W( 'Dear All' );
W();
W( 'test' );
W();
--// mail attachment
W();
W( '------_=_NextPart_001_01C87DCB.CD85F300' );
W( 'Content-Disposition: attachment; filename="test.csv"' );
W( 'Content-Type: text/plain' );
W( 'Charset: AL32UTF8' );
W();
W( 'EMPNO'||','||'ENAME' );
for c in( SELECT EMPNO,ENAME FROM EMPLOYEES ) loop
W( c.EMPNO||','||c.ENAME );
end loop;
W( '------_=_NextPart_001_01C87DCB.CD85F300' );
UTL_SMTP.close_data( smtp );
UTL_SMTP.quit( smtp );
end;

thanks Srini, I am using Microsoft Excel.
I think it is a character conversion issue. I tested by sending the message below without charset=AL32UTF8 and the Arabic was displayed as junk; after giving the charset value it works.
begin
utl_mail.send(
sender => '[email protected]',
recipients => '[email protected]',
subject => 'Subject',
message => 'إلاق فرع (ح',
mime_type => 'text; charset=AL32UTF8');
end;

I think in the code I am not passing the charset value correctly. Can you kindly tell me where I can set the charset in the code below?
procedure test is
smtp UTL_SMTP.connection;
reply UTL_SMTP.reply;
csvContent clob;---added
procedure W( line varchar2 default null ) is
begin
UTL_SMTP.write_data(
smtp,
line || utl_tcp.CRLF );
end;
begin
smtp := UTL_SMTP.open_connection('test.domain.com',25);
--// IMPORTANT: specify the hostname of the plaform sending the mail!
UTL_SMTP.helo( smtp,'test.domain.com');
UTL_SMTP.mail( smtp,'[email protected]' );
UTL_SMTP.rcpt( smtp,'[email protected]' );
UTL_SMTP.open_data( smtp );
--// mail header
W( 'MIME-Version: 1.0' );
W( 'Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01C87DCB.CD85F300"');
W( 'Subject: test' );
W( 'From: test' );
--// mail text body
W();
W( '------_=_NextPart_001_01C87DCB.CD85F300' );
W( 'Content-Transfer-Encoding: 8bit' );
W( 'Content-Type: text/plain' );
W( 'Charset: AL32UTF8' );
W();
W( 'Dear All' );
W();
W( 'test' );
W();
--// mail attachment
W();
W( '------_=_NextPart_001_01C87DCB.CD85F300' );
W( 'Content-Disposition: attachment; filename="test.csv"' );
W( 'Content-Type: text/plain' );
W( 'Charset: AL32UTF8' );
W();
W( 'EMPNO'||','||'ENAME' );
for c in( SELECT EMPNO,ENAME FROM EMPLOYEES ) loop
W( c.EMPNO||','||c.ENAME );
end loop;
W( '------_=_NextPart_001_01C87DCB.CD85F300' );
UTL_SMTP.close_data( smtp );
UTL_SMTP.quit( smtp );
end; -
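One note on the charset question above: 'Charset: AL32UTF8' is not a standard MIME header. The character set belongs inside the Content-Type header as a parameter, and Oracle's AL32UTF8 corresponds to IANA 'UTF-8'. A sketch with Python's email package (illustrative only; the addresses are placeholders) showing where the parameter ends up:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("mixed")
msg["Subject"] = "test"
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"

# Passing the charset to MIMEText puts it where mail clients look for it:
csv_part = MIMEText("EMPNO,ENAME\n1,\u0625\u0644\u0627\u0642\n", "plain", "utf-8")
csv_part.add_header("Content-Disposition", "attachment", filename="test.csv")
msg.attach(csv_part)

print(csv_part["Content-Type"])  # text/plain; charset="utf-8"
```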
Issue of Text Qualiffier after upgrading to SQL Server 2014 Standard
Hi,
I'm facing an issue with the text qualifier after upgrading my system to SQL Server 2014 Standard Edition (Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
Feb 20 2014 20:04:26
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)).
I have a CSV file with this format: the text qualifier is double pipes (||), a vertical bar (|) is the column delimiter, and the row delimiter is CRLF.
Example file:
||Col1|||||Col2||
||ABC|||||XYZ||
This file works OK in SQL Server 2008 R2 (SP2), but it does not work in SQL Server 2014; it raises an error while running.
Error: An error occurred while skipping data rows.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on MediaAgencyGroup returned error code 0xC0202091. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure
code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
Any suggestion here?
Thanks,

Yes, there's a difference in the way text qualifiers are parsed from 2012 onwards;
see
http://www.proactivespeaks.com/2012/06/22/my-5-favorite-sql-2012-ssis-features-so-far/
http://blog.concentra.co.uk/2013/06/24/ssis-2012-flat-files-now-greatly-improved-but-are-they-good-enough-yet/
Anyway, my preferred way to do this is as below:
http://visakhm.blogspot.in/2014/06/ssis-tips-handling-embedded-text.html
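The gist of those workarounds can be shown with a toy preprocessor: strip the multi-character qualifier yourself and split on the delimiter, rather than relying on the flat-file parser. A hypothetical sketch (qualifier and delimiter taken from the question):

```python
def parse_row(line: str, qualifier: str = "||", delimiter: str = "|"):
    """Split one row whose fields are wrapped in a multi-character text
    qualifier, e.g. '||ABC|||||XYZ||' -> ['ABC', 'XYZ']."""
    sep = qualifier + delimiter + qualifier          # closing + delimiter + opening
    fields = []
    for raw in line.split(sep):
        fields.append(raw.removeprefix(qualifier).removesuffix(qualifier))
    return fields

print(parse_row("||ABC|||||XYZ||"))   # ['ABC', 'XYZ']
```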
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
My Wiki User Page
My MSDN Page
My Personal Blog
My Facebook Page -
Oracle PL/SQL Send Mail Issue
Hi All,
I have been sending emails using Oracle PL/SQL code on both 10g and 11g, but nowadays mail is getting triggered without any body on both versions.
Some mails are triggered perfectly. I am unable to sort out the issue; please help me with this. The code I have used is:
create or replace
PROCEDURE Sendmail(V_MAILHOST VARCHAR2,V_FROM VARCHAR2,V_TO clob ,V_CC VARCHAR2, V_SUBJECT VARCHAR2,V_MSGTEXT VARCHAR2,V_MODULE VARCHAR2) AS
l_mail_conn UTL_SMTP.connection;
l_subject VARCHAR2(100):='test mail';
l_msg_text VARCHAR2(500):='hi , testing alert mail';
v_reply utl_smtp.reply;
L_RECIPIENTS clob;
l_recipients1 clob;
V_TO1 clob;
v_errmsg VARCHAR2(500);
mul_recip NUMBER;
mul_recip1 NUMBER;
slen NUMBER:=1;
slen1 NUMBER:=1;
V_errcode VARCHAR2(500);
BEGIN
l_mail_conn:= UTL_SMTP.open_connection(v_mailhost,25);
v_reply :=UTL_SMTP.helo(l_mail_conn, v_mailhost);
v_reply :=UTL_SMTP.mail(l_mail_conn, v_from);
-- V_TO1:=null;--'[email protected]';
SELECT INSTR(V_TO,',') INTO mul_recip FROM dual;
IF mul_recip =0
THEN
utl_smtp.rcpt(l_mail_conn, V_TO );
ELSE
WHILE(INSTR(V_TO,',',slen) > 0)
LOOP
l_recipients := SUBSTR(V_TO, slen, INSTR(SUBSTR(V_TO,slen),',')-1);
slen := slen+INSTR(SUBSTR(V_TO, slen),',');
utl_smtp.rcpt(l_mail_conn, l_recipients);
END LOOP;
l_recipients := SUBSTR(V_TO, slen);
utl_smtp.rcpt(l_mail_conn, l_recipients);
END IF;
IF V_CC IS NOT NULL
THEN
SELECT INSTR(V_CC,',') INTO mul_recip1 FROM dual;
IF mul_recip1 =0
THEN
utl_smtp.rcpt(l_mail_conn, V_CC );
ELSE
WHILE(INSTR(V_CC,',',slen1) > 0)
LOOP
l_recipients1 := SUBSTR(V_CC, slen1, INSTR(SUBSTR(V_CC,slen1),',')-1);
slen1 := slen1+INSTR(SUBSTR(V_CC, slen1),',');
utl_smtp.rcpt(l_mail_conn, l_recipients1);
END LOOP;
l_recipients1 := SUBSTR(V_CC, slen1);
utl_smtp.rcpt(l_mail_conn, l_recipients1);
END IF;
END IF;
v_reply :=utl_smtp.open_data(l_mail_conn );
utl_smtp.write_data(l_mail_conn, 'From: ' || V_FROM || utl_tcp.crlf);
utl_smtp.write_data(l_mail_conn, 'Subject: ' || v_subject || utl_tcp.crlf);
utl_smtp.write_data(l_mail_conn, 'To: ' || V_TO || utl_tcp.crlf);
utl_smtp.write_data(l_mail_conn, 'CC: ' || V_CC || utl_tcp.crlf);
utl_smtp.write_data(l_mail_conn, 'Content-Type: text/html' || utl_tcp.crlf);
UTL_SMTP.WRITE_DATA(L_MAIL_CONN, V_MSGTEXT );
if length(V_TO)>=4000 then
dbms_output.put_line('none');
else
insert into MAIL_LOG(MAILFROM, MAILTO, SUBJECT, LOGGED_DATE, MAIL_STATUS, MAILCC, MAILTEXT,MODULE_ID)
VALUES(V_FROM,v_to,V_SUBJECT,SYSDATE,'sent',V_CC,V_MSGTEXT,V_MODULE);
end if;
utl_smtp.close_data(l_mail_conn );
utl_smtp.quit(l_mail_conn);
EXCEPTION
WHEN utl_smtp.transient_error OR utl_smtp.permanent_error THEN
BEGIN
UTL_SMTP.QUIT(l_mail_conn);
EXCEPTION
WHEN UTL_SMTP.TRANSIENT_ERROR OR UTL_SMTP.PERMANENT_ERROR THEN
NULL; -- When the SMTP server is down or unavailable, we don't have
-- a connection to the server. The QUIT call will raise an
-- exception that we can ignore.
END;
v_errmsg:='Failed to send mail due to the following error: ' ||SQLERRM||' errcode '||SQLCODE||' from: '||V_FROM||' V_TO: '||V_TO;
INSERT INTO MAIL_LOG(MAILFROM, MAILTO, SUBJECT, LOGGED_DATE, MAIL_STATUS, MAILCC, MAILTEXT,MODULE_ID)
VALUES(V_FROM,v_to,V_SUBJECT,SYSDATE,v_errmsg,V_CC,V_MSGTEXT,V_MODULE);
END Sendmail; -
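One CRLF-related detail worth checking in the procedure above: RFC 5322 requires a blank line (a bare CRLF) between the last header and the body, and the code writes V_MSGTEXT immediately after the Content-Type header. Many servers then swallow the body as a malformed header, which matches the "mail without any body" symptom. A minimal sketch of the required framing (illustrative, not a fix verified against this system):

```python
CRLF = "\r\n"

def frame_message(headers: dict, body: str) -> str:
    """Assemble an RFC 5322 message: headers, one blank line, then body."""
    head = CRLF.join(f"{name}: {value}" for name, value in headers.items())
    return head + CRLF + CRLF + body   # the empty line is mandatory

msg = frame_message(
    {"From": "[email protected]", "To": "[email protected]",
     "Subject": "test", "Content-Type": "text/html"},
    "<p>hello</p>",
)
assert CRLF + CRLF in msg   # header block and body are separated
```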
Stretch with Overflow and CRLF's
I'm having a problem with the Stretch With Overflow,
specifically in relation to data that contains CRLF characters. It
seems to me that the rendering engine doesn't recognize the fact
that when it sees a CRLF, it should add on 1 line worth of space.
For example, if a query column has a few words in the first line,
two CRLF's then a few words in the third line, I only see the first
line.
I have a horrible workaround in place right now that goes
like this for the field expression:
Replace(query.columnName, chr(13), RepeatString(chr(5),100)
& chr(13),"All")
This works OK but produces inconsistent results, generating
more space than required for most instances, and not enough for
some.
Is there some better workaround that anyone's come up with?
Is this a known issue?
TIA,
Joe

My guess is that your problem is that your data only has a
carriage return character in it and no line feed. The report
builder doesn't seem to handle carriage returns too well, so if you
do Replace(query.columnName, chr(13), chr(10), "ALL") it might
solve your problem. -
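The reply's Replace(query.columnName, chr(13), chr(10), "ALL") amounts to normalizing bare carriage returns to line feeds before the renderer sees them. The same idea in a Python sketch:

```python
def fix_bare_cr(text: str) -> str:
    """Convert CRLF and bare CR line endings to LF, mirroring the
    chr(13) -> chr(10) replacement suggested above."""
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(fix_bare_cr("line one\r\rline three").splitlines())
# ['line one', '', 'line three']
```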
OpenSSL certificate generation issue - BPEL email activity
Hi all,
I need to send a mail from BPEL using the email activity.
I made all the settings changes. I downloaded the OpenSSL software, and I need to generate SMTP SSL certificates.
But while generating the SSL certificates I am getting an error:
OpenSSL> openssl s_client -starttls smtp -crlf -connect smtp.gmail.com:465>
gmail.cert
openssl:Error: 'openssl' is an invalid command.
Standard commands
asn1parse ca ciphers crl crl2pkcs7
dgst dh dhparam dsa dsaparam
ec ecparam enc engine errstr
gendh gendsa genrsa nseq ocsp
passwd pkcs12 pkcs7 pkcs8 prime
rand req rsa rsautl s_client
s_server s_time sess_id smime speed
spkac verify version x509
Message Digest commands (see the `dgst' command for more details)
md2 md4 md5 rmd160 sha
sha1
Cipher commands (see the `enc' command for more details)
aes-128-cbc aes-128-ecb aes-192-cbc aes-192-ecb aes-256-cbc
aes-256-ecb base64 bf bf-cbc bf-cfb
bf-ecb bf-ofb cast cast-cbc cast5-cbc
cast5-cfb cast5-ecb cast5-ofb des des-cbc
des-cfb des-ecb des-ede des-ede-cbc des-ede-cfb
des-ede-ofb des-ede3 des-ede3-cbc des-ede3-cfb des-ede3-ofb
des-ofb des3 desx idea idea-cbc
idea-cfb idea-ecb idea-ofb rc2 rc2-40-cbc
rc2-64-cbc rc2-cbc rc2-cfb rc2-ecb rc2-ofb
rc4 rc4-40
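For what it's worth, the "'openssl' is an invalid command" error (and the help dump above) appears because the whole command line was typed at the interactive OpenSSL> prompt, where the leading word openssl is parsed as a nonexistent subcommand. A sketch of running it from the operating-system shell instead; note also that smtp.gmail.com:465 speaks implicit TLS, so -starttls smtp normally pairs with the submission port 587 (an assumption based on standard Gmail ports):

```shell
# Run from the OS shell, not inside the "OpenSSL>" prompt.
# Port 587 uses STARTTLS; for 465 drop "-starttls smtp" entirely.
openssl s_client -starttls smtp -crlf -connect smtp.gmail.com:587 \
    </dev/null > gmail.cert 2>/dev/null || true
```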
Can anyone tell me whether what I entered is correct, and how to generate the SMTP certificates?
Thanks in advance
Krishna

Fabian,
Are you familiar with Firefox OS? The reason I ask is that the email client cannot create a certificate exception. This is by design: https://wiki.mozilla.org/Gaia/Email/Features#Security
This support request at Mozilla was placed specifically for the Firefox OS product, for which only a single email client exists.
That being said, the good folks on the Mozilla Bugzilla were able to show me how to look up another alias for these servers, which does in fact work and does in fact match the SSL certificates. Dreamhost support could not provide me with said information, and said information does not exist in the DreamHost wiki.
I find the repeated insistence from Dreamhost representatives that I should just live with SSL certificate exceptions, when there are actual valid server names in existence to match the certificates in question, ridiculous.
The fact that you are posting this non-solution for a product it isn't even applicable to is beyond unhelpful. It actually serves to muddy the waters. -
New DVR Issues (First Run, Channel Switching, etc.)
I've spent the last 30 minutes trying to find answers through the search with no luck, so sorry if I missed something.
I recently switched to FIOS from RCN cable in New York. I've gone through trying to set up my DVR, am running into issues, and was hoping for some answers.
1. I set up two programs to record at 8PM; I was watching another channel at the time and only half paying attention. Around 8:02 I noticed a message had popped up asking if I would like to switch channels to start recording. I was expecting it to force the switch like my old DVR, but in this case it didn't switch, and I missed the first two minutes of one of the shows. I typically leave my DVR on all day and just turn off the TV; this dual-show handling will cause issues with that if I forget to turn off the DVR. Is there a setting I can change that will force the DVR to choose one of the recording channels?
2. I set up all my recordings for "First Run" because I only want to see the new episodes. One show I set up was The Daily Show on Comedy Central, which airs weeknights at 11pm and is repeated 3-4 times throughout the day. My scheduled recordings list shows all of these as planned recordings, even though only the 11pm showing is really "new". Most of the shows I've set up are once a week so they aren't a problem, but this seems like it will quickly fill my DVR. Any fixes?
Thanks for the help.
Solved!
Go to Solution.

I came from RCN about a year ago. Fios is different in several ways, not all of them desirable. Here are several ways to get (and fix) unwanted recordings from a series recording setup.
Some general principles.
Saving changes. When you originally create a series with options, or if you go back to edit the options for an existing series, You MUST save the Series Options changes. Pretty much everywhere else in the user interface, when you change an option, the change takes effect immediately--but not in Series Options. Look at the Series Options window. Look at the far right side. There is a vertical "Save" bar, which you must navigate to and click OK on to actually save your changes. Exiting the Series Options window without having first saved your changes loses all your attempted changes--immediately.
Default Series Options. This is accessed from [Menu]--DVR--Settings--Default Series Options. This will bring up the series options that will automatically be applied to the creation of a NEW series. The options for every previously created series will not be affected by a subsequent modification of the Default Series Options. You should set these options to the way you would like them to be for the majority of series recordings that you are likely to create. Be sure to SAVE your changes. This is what you will get when you select "Create Series Recording" from the Guide. When creating a new series recording where you think that you may want options different from the default, select "Create Series with Options" instead. Series Options can always be changed for any individual series set up later--but not for all series at once.
Non-series recordings. With Fios you have no directly available options for these. With RCN and most other DVRs, you can change the start and end times for individual episodes, including individual episodes that are also in a series. With Fios, your workarounds are to create a series with options for a single program, then delete the series later; change the series options if the program is already in a series, then undo the changes you made to the series options later; or schedule recordings of the preceding and/or following shows as needed.
And now, to the unwanted repeats.
First, make sure your series options for the specific series in question (and not just the series default options) include "First Run Only". If not, fix that and SAVE. Then check your results by viewing the current options using the Series Manager app under the DVR menu.
Second, and most annoying, the Guide can have repeat programs on your channel tagged as "New". It happens. Set the series option "Air Time" to "Selected Time". For this to work correctly, you must have set up the original series recording after selecting the program in the Guide at the exact time of a first-run showing (11pm, in your case), not on a repeat entry in the Guide. Then, even if The Daily Show is tagged as New for repeat showings, those will be ignored.
Third, another channel may air reruns of the program in your series recording, and the first showing of a rerun episode on the other channel may be tagged as "New". These can be ignored in your series if you set the series option "Channel" to "Selected Channel". Related to this, if both an SD and an HD channel broadcast your series program, you will record both if the series option "Duplicates" is set to "Yes". However, when the Channel option is set to "Selected Channel", the Duplicates option is always effectively "No", regardless of what shows up on the options screen.
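To make the interaction of these three options concrete, here is a rough sketch of the matching logic they imply (a hypothetical model written for illustration, not actual Fios firmware; all field and option names here are made up):

```python
# Hypothetical model: decide whether a Guide entry triggers a series recording,
# given the "First Run Only", "Air Time", and "Channel" options described above.
def matches(entry: dict, opts: dict) -> bool:
    if opts["first_run_only"] and not entry["tagged_new"]:
        return False  # repeats not tagged "New" are skipped
    if opts["air_time"] == "Selected Time" and entry["time"] != opts["selected_time"]:
        return False  # repeats wrongly tagged "New" fail the time check
    if opts["channel"] == "Selected Channel" and entry["channel"] != opts["selected_channel"]:
        return False  # "New" reruns on other channels fail the channel check
    return True

opts = {"first_run_only": True,
        "air_time": "Selected Time", "selected_time": "23:00",
        "channel": "Selected Channel", "selected_channel": "COM-HD"}

first_run = {"tagged_new": True, "time": "23:00", "channel": "COM-HD"}
rerun_tagged_new = {"tagged_new": True, "time": "01:00", "channel": "COM-HD"}
print(matches(first_run, opts), matches(rerun_tagged_new, opts))  # True False
```

The point of the sketch: "First Run Only" alone cannot save you from mis-tagged repeats, but the time and channel restrictions act as independent filters on top of it.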
As for your missing two minutes, I have several instances in which two programs start recording at the same time. To the best of my recollection, whenever the warning message has appeared, ignoring it has not caused a loss of recording time. You might have an older software version--the newest is v.1.8; look at Menu--Settings--System Info. Or I might not have noticed the loss of minutes. I regularly see up to a minute of the previous program at the start of a recording, or a few missing seconds at the beginning or end of one. There are a lot of possible causes for that, but the DVR clock being incorrect is not one of them. With RCN, the DVR clocks occasionally drifted off by as much as a minute and a half. -
Pension issue Mid Month Leaving
Dear All,
As per the rules, in mid-month joining/leaving/absence or transfer scenarios the Pension/PF Basis should be correspondingly prorated, but our system is not doing this. In the RT table I have found: /3FC Pension Basis for Er c 01/2010 0.00 6,500.00.
The employee's leaving date is 14.04.2010, yet the system is picking the full pension amount of 541. Last year it was coming out right.
Please suggest.
Ashwani
Dear Jayanti,
We require prorated pension for employees who have left, and the system is not doing this. That is the issue. As per our PF experts, the pension amount should be computed on a prorata basis when an employee leaves in the middle of the month. The system prorated correctly last year, but from this year it is deducting the full 541. I am giving two RT cases from different years.
RT table for year 2010. DOL 26.04.2010
/111 EPF Basis 01/2010 0.00 8,750.00
/139 VPF Basis 01/2010 0.00 8,750.00
/3F1 Ee PF contribution 01/2010 0.00 1,050.00
/3F3 Er PF contribution 01/2010 0.00 509.00
/3F5 Ee Mon PF contribution 01/2010 0.00 1,050.00
/3F6 Ee Ann PF contribution 01/2010 0.00 12,600.00
/3F9 PF adm chrgs * 1,00,00 01/2010 0.00 96.25
/3FA PF basis for Ee contri 01/2010 0.00 8,750.00
/3FB PF Basis for Er Contri 01/2010 0.00 8,750.00
/3FJ VPF basis for Ee contr 01/2010 0.00 8,750.00
/3FL PF Basis for Er Contri 01/2010 0.00 6,500.00
/3F4 Er Pension contributio 01/2010 0.00 541.00
/3FC Pension Basis for Er c 01/2010 0.00 6,500.00
/3FB PF Basis for Er Contri 01/2010 0.00 8,750.00
/3FC Pension Basis for Er c 01/2010 0.00 6,500.00
/3FJ VPF basis for Ee contr 01/2010 0.00 8,750.00
/3FL PF Basis for Er Contri 01/2010 0.00 6,500.00
/3R3 Metro HRA Basis Amount 01/2010 0.00 8,750.00
1BAS Basic Salary 01/2010 0.00 8,750.00
RT table for year 2009. DOL 27.10.2009
/111 EPF Basis 07/2009 0.00 9,016.13
/139 VPF Basis 07/2009 0.00 9,016.13
/3F1 Ee PF contribution 07/2009 0.00 1,082.00
/3F3 Er PF contribution 07/2009 0.00 628.00
/3F5 Ee Mon PF contribution 07/2009 0.00 1,082.00
/3F6 Ee Ann PF contribution 07/2009 0.00 8,822.00
/3F9 PF adm chrgs * 1,00,00 07/2009 0.00 99.18
/3FA PF basis for Ee contri 07/2009 0.00 9,016.00
/3FB PF Basis for Er Contri 07/2009 0.00 9,016.00
/3FJ VPF basis for Ee contr 07/2009 0.00 9,016.00
/3FL PF Basis for Er Contri 07/2009 0.00 5,452.00
/3FB PF Basis for Er Contri 07/2009 0.00 9,016.00
/3FC Pension Basis for Er c 07/2009 0.00 5,452.00
/3FJ VPF basis for Ee contr 07/2009 0.00 9,016.00
/3FL PF Basis for Er Contri 07/2009 0.00 5,452.00
/3R4 Non-metro HRA Basis Am 07/2009 0.00 9,016.13
1BAS Basic Salary 07/2009 0.00 9,016.13
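For what it's worth, the proration we are expecting can be sketched as follows (a rough illustrative model, assuming the 6,500 statutory pension ceiling visible in /3FC above and an 8.33% employer pension share; 6,500 x 8.33% = 541.45, which matches the 541 the system posts for a full month):

```python
from datetime import date
import calendar

CEILING = 6500.0   # statutory pension wage ceiling (assumed from /3FC above)
EPS_RATE = 0.0833  # employer pension share, 8.33% (assumed)

def prorated_pension(dol: date, monthly_basis: float) -> float:
    """Prorate the pension basis by calendar days worked in the leaving month."""
    days_in_month = calendar.monthrange(dol.year, dol.month)[1]
    basis = min(monthly_basis, CEILING) * dol.day / days_in_month
    return round(basis * EPS_RATE, 2)

# Full month on the ceiling: 6,500 * 8.33% = 541.45 (the 541 the system posts)
full_month = round(CEILING * EPS_RATE, 2)
# Leaving 14.04.2010: only 14 of 30 days worked, so both basis and
# contribution should shrink accordingly
mid_month = prorated_pension(date(2010, 4, 14), 8750.0)
print(full_month, mid_month)  # 541.45 252.68
```

So for a 14.04.2010 leaving date we would expect a contribution of roughly 253 on a prorated basis of about 3,033, not the full 541 on 6,500.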
Now please suggest what to do. Where is the problem? I have also checked EXIT_HINCALC0_002, but nothing is written in it.
With Regards
Ashwani -
Open PO Analysis - BW report issue
Hello Friends
I constructed a query in BW to show open purchase orders. We have a custom DSO populated from the standard datasource 2LIS_02_ITM (Purchase Order Item). In this DSO we mapped the field ELIKZ to the InfoObject 0COMP_DEL (Delivery Completed).
We loaded the data from the ECC system for all POs and found the following issue for stock transport purchase orders (DocType = UB). We have a PO with 4 line items. For line items 10 and 20, goods were issued and received, and both the "Delivery completed" and "Final delivery" flags are checked. For line items 30 and 40, only a delivery note was issued, for zero quantity, and the "Delivery completed" flag is checked (the "Final delivery" flag is not) in the ECC system. For this PO, the delivery-completion indicator is not properly updated in the DSO for line items 30 and 40. The data looks like the following:
DOC_NUM DOC_ITEM DOCTYPE COMP_DEL
650000001 10 UB X
650000001 20 UB X
650000001 30 UB
650000001 40 UB
When we run the Open PO analysis report on the BW side, this PO appears in the report, but it is closed in the ECC system.
Any help is appreciated in this regard.
Thanks and Regards
sampath
Hi Priya and Reddy,
Thanks for your response.
Yes, the indicator is checked in the EKPO table for items 30 and 40, and the delta has been running regularly for more than a year with no issues for other POs. This is happening only for a few POs of type Stock Transport (UB).
I already checked the changes in ME23N; the Delivery Completed indicator was changed, and that is reflected in the EKPO table. Further, I checked the PSA records for this PO: I am getting records with the Delivery Completed flag, but when I update from PSA to the DSO the indicator is not updated properly.
In the PSA, for item 30 I have the following entries. Record number 42 captures the value X for ELIKZ, but after that I get two more records, 43 and 44, with process key 10 and without X for ELIKZ. I think this is causing the problem.
Record No. Doc.No. Item Processkey Rocancel Elikz
41 6500000001 30 11 X ---
42 6500000001 30 11 --- X
43 6500000001 30 10 X ---
44 6500000001 30 10 --- ---
(Here --- means blank)
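That reading looks right. Here is a minimal model (plain Python, not BW itself) of overwrite behavior during DSO activation, assuming records for a key are applied in record-number order: the later process-key-10 records, which carry a blank ELIKZ, simply win over record 42.

```python
# Delta records for doc 6500000001, item 30, as listed in the PSA above:
# (rec_no, doc_num, item, process_key, rocancel, elikz); "" means blank.
records = [
    (41, "6500000001", "30", "11", "X", ""),
    (42, "6500000001", "30", "11", "", "X"),   # ELIKZ = X arrives here
    (43, "6500000001", "30", "10", "X", ""),
    (44, "6500000001", "30", "10", "", ""),    # ...and is blanked again here
]

dso = {}
for rec_no, doc, item, pkey, rocancel, elikz in sorted(records):
    # Overwrite DSO semantics: the last record for a key determines
    # the final value of the characteristic.
    dso[(doc, item)] = elikz

print(repr(dso[("6500000001", "30")]))  # '' -> delivery-completed flag lost
```

This is only a sketch of the overwrite mechanics, but it shows why record 42's X never survives activation as long as records 43 and 44 follow it with a blank ELIKZ.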
Thanks and Regards
sampath -
HP LaserJet Enterprise 600 M602 driver issue
Hello,
I've got an issue with 600-series printers. We use the latest UPD driver, ver. 61.175.1.18849, and print from XenApp 6.5. The error occurs every time users try to print JPG files from a XenApp session. It only happens with 600-series printers and the UPD.
I've also tried assigning the native 600-series driver, ver. 6.3.9600.16384, and it works well. But with that driver the system reports the device as a color printer, which breaks our printing reports. These reports are very important to us, so we can't use the printer with that driver either.
Printer installed on Windows Server 2012 R2. All clients are Windows 7 x64. XenApp Servers are Server 2008R2.
Is it possible to get a fixed UPD driver, or a correct native driver for Server 2012 R2?
Regards,
Anatoly
I am sorry, but to get your issue more exposure I would suggest posting it in the commercial forums, since this is a commercial printer. You can do this at Printers - LaserJet.
Click on New Post.
I hope this helps.
Gemini02
I work on behalf of HP