Character types with size greater than 256 in DOE

What is the standard way to use character types with a length greater than 256 characters in DOE (BAPI wrapper)?

Use STRING for lengths greater than 256 characters. In DOE, select the TEXT_MEMO checkbox while defining the node attribute in the Data Object.
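
As a minimal ABAP sketch of the two declaration styles (illustrative names; the TEXT_MEMO flag itself is set in the Data Object definition, not in code):

DATA: lv_char TYPE c LENGTH 255, " fixed-length character field, capped at its declared length
      lv_memo TYPE string.       " variable-length string, suitable for texts longer than 256 characters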

Similar Messages

  • Passing variable of size greater than 32767 from Pro*C to PL/SQL procedure

    Hi,
    I am trying to pass a variable of size greater than 32767 from Pro*C to a PL/SQL procedure. I tried assigning the host variable directly to a CLOB in the SQL section, but nothing happens. In the code below the size of l_var1 is 33000. PROC_DATA is a procedure that takes a CLOB as input and returns the other three (Data, Err_Code, Err_Msg) as output. These variables are declared globally.
    Process_Data(char* l_var1)
    EXEC SQL EXECUTE
    DECLARE
    l_clob clob;
    BEGIN
    l_clob := :l_var1;
    PROC_DATA(l_clob,:Data,:Err_Code,:Err_Msg) ;
    COMMIT;
    END;
    END-EXEC;
    I also tried using DBMS_LOB. This is the code that I used.
    Process_Data(char* l_var1)
    EXEC SQL EXECUTE
    DECLARE
    l_clob clob;
    BEGIN
    DBMS_LOB.CREATETEMPORARY(l_clob,TRUE);
    DBMS_LOB.OPEN(l_clob,dbms_lob.lob_readwrite);
    DBMS_LOB.WRITE (l_clob, LENGTH (:l_var1), 1,:l_var1);
    PROC_DATA(l_clob,:Data,:Err_Code,:Err_Msg) ;
    COMMIT;
    END;
    END-EXEC;
    Here, since the DBMS_LOB calls allow a maximum of 32767 bytes per write, the value of l_var1 is not being assigned to l_clob.
    I am able to do the above provided I split l_var1 into two variables and then append to l_clob using WRITEAPPEND, i.e. l_var1 is 32000 in length and l_var2 contains the rest.
    Process_Data(char* l_var1,char* l_var2)
    EXEC SQL EXECUTE
    DECLARE
    l_clob clob;
    BEGIN
    dbms_lob.createtemporary(l_clob,TRUE);
    dbms_lob.OPEN(l_clob,dbms_lob.lob_readwrite);
    DBMS_LOB.WRITE (l_clob, LENGTH (:l_var1), 1,:l_var1);
    DBMS_LOB.WRITEAPPEND (l_clob, LENGTH(:l_var2), :l_var2);
    PROC_DATA(l_clob,:Data,:Err_Code,:Err_Msg) ;
    COMMIT;
    END;
    END-EXEC;
    But the above code requires dynamic memory allocation in Pro*C, which I would like to avoid. Could you let me know if there is any other way to do this?

    Hi,
    The LONG datatype has been deprecated; use CLOB or BLOB instead. This will solve a lot of the problems inherent in that datatype.
    Regards,
    Ganesh R
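
    A minimal sketch of another route, using Pro*C's embedded LOB statements instead of an anonymous PL/SQL block (untested; assumes Data, Err_Code and Err_Msg are global host variables as in the post, and that your precompiler accepts a char host array of this size):
    #include <string.h>
    #include <oci.h>
    EXEC SQL BEGIN DECLARE SECTION;
        OCIClobLocator *l_clob;
        char l_var1[33001];
        unsigned int amt;
    EXEC SQL END DECLARE SECTION;
    void Process_Data()
    {
        amt = strlen(l_var1);
        EXEC SQL ALLOCATE :l_clob;             /* allocate the CLOB locator */
        EXEC SQL LOB CREATE TEMPORARY :l_clob; /* session-scoped temporary CLOB */
        /* single write from the C buffer; the 32767-byte PL/SQL limit does not apply */
        EXEC SQL LOB WRITE ONE :amt FROM :l_var1 INTO :l_clob;
        EXEC SQL EXECUTE
        BEGIN
            PROC_DATA(:l_clob, :Data, :Err_Code, :Err_Msg);
            COMMIT;
        END;
        END-EXEC;
        EXEC SQL LOB FREE TEMPORARY :l_clob;
        EXEC SQL FREE :l_clob;                 /* release the locator */
    }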

  • Index size greater than table size

    HI ,
    While checking the large segments, I noticed that index HZ_PARAM_TAB_N1 is larger than table HZ_PARAM_TAB. I think it is highly fragmented and requires defragmentation. Please suggest how I can collect more supporting information. Details below.
    1.
    select sum(bytes)/1024/1024/1024,segment_name from dba_segments group by segment_name having sum(bytes)/1024/1024/1024 > 1 order by 1 desc;
    SUM(BYTES)/1024/1024/1024  SEGMENT_NAME
                   81.2941895  HZ_PARAM_TAB_N1
                   72.1064453  SYS_LOB0000066009C00004$$
                   52.7703857  HZ_PARAM_TAB
    2. Index columns
    <pre>
    COLUMN_NAME  COLUMN_POSITION
    ITEM_KEY                   1
    PARAM_NAME                 2
    </pre>
    Regards
    Rahul

    Hi ,
    Thanks. I know that a rebuild will defragment it. But as I'm new at this site, I was looking for some more supporting information before drafting the mail that it requires a reorg activity. It should not be possible for the index to be larger than the table, as the index contains only 2 column values + rowid, whereas the table contains 6 columns.
    <pre>
    Name             Datatype  Length  Mandatory  Comments
    ITEM_KEY         VARCHAR2  (240)   Yes        Unique identifier for the event raised
    PARAM_NAME       VARCHAR2  (2000)  Yes        Name of the parameter
    PARAM_CHAR       VARCHAR2  (4000)             Value of the parameter only if its data type is VARCHAR2.
    PARAM_NUM        NUMBER                       Value of the parameter only if its data type is NUM.
    PARAM_DATE       DATE                         Value of the parameter only if its data type is DATE.
    PARAM_INDICATOR  VARCHAR2  (3)     Yes        Indicates if the parameter contains existing, new or replacement values. OLD values currently exist. NEW values create initial values or replace existing values.</pre>
    Regds
    Rahul
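
    A standard way to gather that supporting evidence is to validate the index structure and read INDEX_STATS; a sketch (note that ANALYZE ... VALIDATE STRUCTURE locks the table against DML while it runs, and INDEX_STATS holds data only for the session that ran the ANALYZE):
    ANALYZE INDEX HZ_PARAM_TAB_N1 VALIDATE STRUCTURE;
    SELECT height, lf_rows, del_lf_rows,
           ROUND(del_lf_rows * 100 / lf_rows, 2) AS pct_deleted,
           btree_space, used_space
    FROM   index_stats;
    A high pct_deleted (a rule of thumb often quoted is above 15-20%) is the kind of supporting figure you can put in the mail before proposing a rebuild or coalesce.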

  • Parse an XML of size greater than 64k using DOM

    Hi,
    I have a question regarding a limitation on parsing a file of size greater than 64k in Oracle 10g. Is the error "ORA-31167: XML nodes over 64K in size cannot be inserted" related to this?
    One of the developers was telling me that if we load an XML document of size greater than 64k into Oracle DOM, it will fail. Is 64k the size of the file or the size of a text node in the XML?
    Is there a way we can overcome this limitation?
    I believe the Oracle 11g R1 documentation states that the existing 64k limitation on the size of a text node has been eliminated. So if we use Oracle 11g, does that mean we can load XML files of size greater than 64K (or XML having text nodes of size greater than 64k)?
    I am not well versed in XML. Please help me out.
    Thanks for your help.

    Search this forum for the ORA- error.
    Among others it will show the following thread: Node size
    In this case I think we can be assured that "a future release" in 2006 was 11.1, as mentioned by Mark (Sr. Product Manager, Oracle XML DB).

  • Java.exe size greater than 350M, web report often errors

    Hi, friends.
    My IE is 8, and Webi is 4.0.
    The web report file (universe) has 63 reports and hundreds of formulas.
    When the report is open, java.exe's size grows greater than 350M.
    Every time I edit the report, even if I only edit a few formulas, the edit does not work.
    And when I edit Data Access or refresh, I get the error: An error has occurred... (screenshot)
    The only option is to log off and shut down IE...
    After a while I open IE and sign in to the web report... again...
    I set up part of the RAM as a virtual hard disk and pointed the IE browser cache to the new disk,
    but the error still exists.
    Please help me, thanks.

    Hi,
    On Windows 7, you may set the maximum Java heap size to 1536 MB in Java Control Panel -> Java -> Java Runtime Environment Settings, in the Runtime Parameters field for both User and System.
    -Xmx1536m -Xincgc
    Note that
    depending on the desktop OS, the maximum Java heap size could vary, you'd need to test it and find out the ceiling to that OS.
    -Xincgc enables incremental garbage collection instead of waiting for a whole chunk of garbage to be collected at once.
    Hope this helps,
    Jin-Chong

  • PUT Blobs of size greater than 5.5MB fail with HTTPS but not HTTP

    I have written a Cygwin app that uploads (using the REST API PUT operation) Block Blobs to my Azure storage account, and it works well for different size blobs when using HTTP. However, use of SSL (i.e. PUT using HTTPS) fails for blobs greater than 5.5MB.
    Blobs less than 5.5MB upload correctly. Anything greater and I find that the TCP session (as seen by Wireshark) reports a dwindling window size that goes to 0 once the aforementioned number of bytes have been transferred. The failure is very repeatable and consistent. As a point of reference, PUT operations against my Google/AWS/HP accounts work fine when using HTTPS for various object sizes, which suggests my problem is not in my client but specific to the HTTPS implementation on the MS Azure storage servers.
    If I upload the 5.5MB blob as two separate uploads of 4MB and 1.5MB followed by a PUT Block List, the operation succeeds as long as the two uploads used separate HTTPS sessions. Notice the emphasis on separate. That same operation fails if I attempt to maintain an HTTPS session across both uploads. This is another data point that seems to suggest that the storage server has a problem.
    Any ideas on why I might be seeing this odd behavior that appears very specific to MS Azure HTTPS, but is not seen when used against AWS/Google/HP cloud storage servers?

    Hi,
    I'm getting this problem also when trying to upload blobs > 5.5MB using the Azure PHP SDK with HTTPS.
    There is no way I can find to get a blob > 5.5MB to upload, unless you use HTTP rather than HTTPS, which is not a good solution.
    I've written my own scripts using the HTTP_Request2 library to send the request as a test, and it fails with that also when using the 'socket' method.
    However, if I write a script using the PHP Curl extension directly, then it works fine, and blobs > 5.5MB get uploaded.
    It seems to be irrelevant which method is used, uploading in one go or using smaller chunks; the PHP SDK seems broken.
    Also, I think I've found another bug in the SDK: when you do the smaller chunks, the assignment of the block ID is not correct.
    In: WindowsAzure/Blob/BlobRestProxy.php
    Line: $block->setBlockId(base64_encode(str_pad($counter++, '0', 6)));
    That is incorrect usage of the str_pad function, and if you upload a huge blob that needs splitting, the block IDs will after a while become a different length and therefore fail.
    It should be: str_pad($counter++, 6, '0', STR_PAD_LEFT);
    I also think there is one base64_encode() too many in there, as I think it's being done twice: once in that line, and then again within createBlobBlock() just before the send().
    Can someone please advise when this/these bug(s) will be fixed in the PHP SDK, as at the moment it's useless to me since I can't upload things securely.
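
    For reference, str_pad's signature is str_pad($string, $length, $pad_string, $pad_type), so the buggy call passes '0' as the length (cast to 0, making the call a no-op) and 6 as the pad string. A minimal sketch of the corrected generation, identifiers as in the post:
    $blockId = base64_encode(str_pad($counter++, 6, '0', STR_PAD_LEFT)); // counter 1 -> "000001" -> "MDAwMDAx"
    $block->setBlockId($blockId);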

  • How to check arraylist size greater than 1 using expression language in jsp

    I want to remove the scriptlet in my JSP, so I am using JSTL tags with expression language.
    My scriptlet is
    <% if (arraylist.size() > 1) { %>
    ---do something ----
    <% } %>
    I want to change this to
    <c:if test="${ somecondition }">
    ---do something ----
    </c:if>
    where "somecondition" is exactly the check that my arraylist size is greater than 1.
    Please, can anyone help me with how I can do that?

    If you do not mind, you can create a function and package it into a tag library of your own. Then you can use the function just like the existing expression language constructs. You may take a look at the article on using functions in JSP expression language.
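
    For this specific check, the standard JSTL functions taglib already provides fn:length, which works on collections as well as strings; a minimal sketch (assuming arraylist is exposed as a page/request attribute under that name):
    <%@ taglib prefix="c"  uri="http://java.sun.com/jsp/jstl/core" %>
    <%@ taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
    <c:if test="${fn:length(arraylist) gt 1}">
    ---do something ----
    </c:if>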

  • AS2 decryption error on file sizes greater than 5MB.

    We have a client who is not using BizTalk but transmits files to us via AS2. The AS2 file transmission occurs seamlessly when the file size is below 5MB, but the BizTalk AS2 decoder fails to decrypt when the file size exceeds 5MB. After searching the forums,
    I learned that this is a known issue and there is a hotfix available for it. I wanted to replicate the issue in my test environment so that I could apply the hotfix there and make sure nothing breaks. I replicated the AS2
    setup on 2 BizTalk test machines. I used one machine as partner A and the other as partner B, then transmitted AS2 files from partner A to partner B. I sent files with sizes 2MB, 5MB, 15MB, and 50MB, but partner B received all the decrypted files successfully.
    Both the production and test servers have BizTalk 2010 installed.
    In conclusion, the decryption issue occurs on the production machine only, and I am unable to replicate it on our test servers. I am hesitant to apply the hotfix or CU5 directly in production. Please advise if there is something else I am missing.
    Thank you.
    Error message:
    Error details: An output message of the component "Microsoft.BizTalk.EdiInt.PipelineComponents" in receive pipeline "Microsoft.BizTalk.EdiInt.DefaultPipelines.AS2Receive, Microsoft.BizTalk.Edi.EdiIntPipelines, Version=3.0.1.0, Culture=neutral,
    PublicKeyToken=31bf3856ad364e35" is suspended due to the following error:
    An error occurred when decrypting an AS2 message..
    The sequence number of the suspended message is 2
    Hot fixes to fix the issue:
    http://support.microsoft.com/kb/2480994/en-us
    For some people CU5 fixed the issue.
    Dilip Bandi

    First, make sure CU5 wasn't unintentionally applied by Windows Update to your test config.
    Second, either way, a valid strategy would be to apply CU5 as a normal patch, meaning DEV->TEST->UAT->PROD (or whatever your promotion path is). That way, you'll test for any breaking changes anyway, and if the AS2 issue isn't fixed, you're really no worse off.

  • Mp3 playback bug after update (size greater than 8 MB & Bitrate 192...)

    Okay, after the upgrade to iTunes version 7.4 something strange happened: I couldn't play some files any more. After some testing I figured out that the affected files had a pattern, namely:
    size > 8MB &
    bitrate > 192 &
    no album picture
    This isn't a QuickTime bug; QuickTime can play these songs fine. I think there is some error with the preloading of the album artwork for files of that kind. I'd classify this as a serious bug because one of my favorite albums (Nick Cave) is affected.
    regards,
    Georg

    I've solved it.
    The problem is that iTunes can't read certain MP3 tag information any more after the update (7.4), although most of these tags were created with former iTunes versions.
    What I did was remove all MP3 tags in my library with a tool called Mp3tag (http://www.mp3tag.de). Afterwards I let iTunes recreate the tags. It looks as if the metadata is stored twice: once in the MP3 tag and once in the database.
    Anyhow, iTunes recreates the correct MP3 tag out of its database if the tag was deleted from the file.
    That did the trick, without having to recreate the database.
    PS: Steve should send me a free MP3 player, because it took me 2 days to fix this problem.
    PPS: Use a script like the following to find all dead tracks before doing what is described above:
    [snip]
    /* Rename me to FindDeadTracks.js
    Double Click in Explorer to run
    Script by Otto - http://ottodestruct.com */
    var ITTrackKindFile = 1;
    var iTunesApp = WScript.CreateObject("iTunes.Application");
    var deletedTracks = 0;
    var mainLibrary = iTunesApp.LibraryPlaylist;
    var tracks = mainLibrary.Tracks;
    var numTracks = tracks.Count;
    var i;
    var fso, tf;
    fso = new ActiveXObject("Scripting.FileSystemObject");
    tf = fso.CreateTextFile("Dead Tracks.txt", true);
    while (numTracks != 0) {
      var currTrack = tracks.Item(numTracks);
      // is this a file track?
      if (currTrack.Kind == ITTrackKindFile) {
        // yes, does it have an empty location?
        if (currTrack.Location == "") {
          // write info about the track to a file
          tf.WriteLine(currTrack.Artist + "," + currTrack.Album + "," + currTrack.Name);
          deletedTracks++;
        }
      }
      numTracks--;
    }
    if (deletedTracks > 0) {
      if (deletedTracks == 1) {
        WScript.Echo("Found 1 dead track.");
      } else {
        WScript.Echo("Found " + deletedTracks + " dead tracks.");
      }
    } else {
      WScript.Echo("No dead tracks were found.");
    }
    tf.Close();
    [snip]

  • Index size increases beyond table size

    Hi All,
    Could you let me know the possible reasons why an index can be larger than its table, and in some cases smaller than its table? ASAP.
    Thanks in advance
    sherief

    hi,
    The size of an index depends on how inserts and deletes occur.
    With sequential indexes, when records are deleted randomly the space will not be reused, as all inserts go into the leading leaf block.
    When all the records in a leaf block have been deleted, the leaf block is freed (put on the index freelist) for reuse, reducing the overall percentage of free space.
    This means that if you are deleting aged sequence records at the same rate as you are inserting, the number of leaf blocks will stay approximately constant with a constant low percentage of free space. In this case it is most probably hardly ever worth rebuilding the index.
    With records being deleted randomly, the inefficiency of the index depends on how the index is used.
    If numerous full index (or range) scans are being done, the index should be rebuilt to reduce the number of leaf blocks read. This should be done before it significantly affects the performance of the system.
    If single-key index accesses are being done, it only needs to be rebuilt to stop the branch depth increasing or to recover the unused space.
    Here is an example of how an index can become larger than its table:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as admin
    SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
    Table created
    SQL> create index rich_i on rich(c1);
    Index created
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE      BYTES  BLOCKS  EXTENTS
    TABLE           1179648     144        9
    INDEX           1179648     144        9
    SQL> delete from rich where mod(c1,2)=0;
    29475 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE      BYTES  BLOCKS  EXTENTS
    TABLE           1179648     144        9
    INDEX           1179648     144        9
    SQL> insert into rich select rownum+100000, 'qq' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE      BYTES  BLOCKS  EXTENTS
    TABLE           1703936     208       13
    INDEX           2097152     256       16
    SQL> insert into rich select rownum+200000, 'aa' from all_objects;
    58952 rows inserted
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE      BYTES  BLOCKS  EXTENTS
    TABLE           2752512     336       21
    INDEX           3014656     368       23
    SQL> delete from rich where mod(c1,2)=0;
    58952 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE      BYTES  BLOCKS  EXTENTS
    TABLE           2752512     336       21
    INDEX           3014656     368       23
    SQL> insert into rich select rownum+300000, 'hh' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE      BYTES  BLOCKS  EXTENTS
    TABLE           3014656     368       23
    INDEX           4063232     496       31
    SQL> alter index rich_i rebuild;
    Index altered
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE      BYTES  BLOCKS  EXTENTS
    TABLE           3014656     368       23
    INDEX           2752512     336       21
    SQL>

  • HTTP 414 Status code for POST messages greater than 4096 bytes.

    Hello,
    I am using Sun One 6.0 sp2 and Weblogic 6.1 sp3 for my application. All the requests
    are being sent to the Weblogic server using the NSAPI plug-in.
    For all POST messages larger than 4096 bytes, I am getting an HTTP status
    code 414.
    I have set the MaxPostSize to 10240 on the WebLogic server side, but it still
    gives the same error.
    Can someone please guide me as to how to enable processing of POST messages larger
    than 4096 bytes?
    Thank You.
    Sanjay.
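
    A hedged suggestion, since a 414 can be produced by the web-server tier before the request ever reaches WebLogic: the NSAPI proxy plug-in has its own MaxPostSize parameter in the web server's obj.conf, separate from the setting on the WebLogic side. A sketch of the relevant obj.conf fragment (host, port, and path are illustrative):
    <Object name="weblogic" ppath="*/weblogic/*">
    Service fn=wl_proxy WebLogicHost=myhost WebLogicPort=7001 PathTrim="/weblogic" MaxPostSize=10240
    </Object>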

    Hi, I am trying to PUT to update contact info and I get the following error:
    2015-01-16 11:00:17,970 INFO [main] oracle.eloqua.connector.eloqua.EloquaConnector.putWithBasicAuth(97) | accessHttpsPut.url=https://secure.eloqua.com/API/REST/2.0//data/contact/7606838, text={"id":"7606838","accountName":"openIdStr001","emailAddress":"[email protected]","type":"Contact"}
    2015-01-16 11:00:18,931 ERROR [main] oracle.eloqua.connector.eloqua.EloquaConnector.putWithBasicAuth(140) | ClientProtocolException
    org.apache.http.client.HttpResponseException: Request is malformed.
    Any ideas?
    Thanks so much.
    Sincerely.

  • Problem in uploading PDF file greater than 450Kb in portals

    Hi All,
    There is a problem when we upload a PDF file of size greater than 450 KB in portals; it shows an error.
    Do I need to change any server settings for this, or is any other configuration required?
    Help will be appreciated
    Thanks in Advance

    Hi Amit,
    Pls check the thread below:
    /message/1634863#1634863 [original link is broken]
    Hope this helps.
    Regards,
    venkat

  • Unable to process payload greater than 20MB

    We have a SOA system in which a third-party system hits the endpoint URI exposed by a proxy service in OSB. Initially this held good for a 10MB payload size.
    After a while we needed to process payloads greater than 10MB. In that situation we made some configuration changes on the server side, as mentioned below:
    servers --> soa_server1 --> protocols --> Maximum Message Size = 20000000 (20MB)
    servers --> osb_server1 --> protocols --> Maximum Message Size = 20000000 (20MB)
    It works well for payload sizes up to 20MB.
    Now we are in a situation where we must process payload sizes greater than 20MB, e.g. around 25MB, so we again changed the above parameter values:
    servers --> soa_server1 --> protocols --> Maximum Message Size = 30000000 (30MB)
    servers --> osb_server1 --> protocols --> Maximum Message Size = 30000000 (30MB)
    When the third-party system hits the endpoint URI with a payload size greater than 20MB, it throws java.net.SocketTimeoutException: Read timed out.
    Kindly suggest.
    Regards
    Ganesh S

    You have to:
    1. Create 3 GPOs and assign each GPO one of the PCS MSIs.
    2. Assign the GPOs to the appropriate OU(s) using the Group Policy Management snap-in.
    3. Clicking on the OU name in the Group Policy Management snap-in reveals the list of GPOs assigned to the OU, where the list shows the sequence of applied GPOs. The higher the number, the earlier the respective GPO executes. You can manage the order using the arrow icons on the left of the right pane.
    Besides, GPOs do not appear in the Add/Remove control panel; it is the MSI uninstaller information that is being shown. If everything is installed well, it should appear in the Add/Remove window. De-installation can be done using GPO removal from the OU (preferred way) OR manual de-installation at the local computer (in case the GPO was configured not to remove the SW package from computers upon GPO removal from the OU).

  • Email attachment rows greater than 255 char get truncated

    Hi
    I am trying to write code on 4.6C to email an Excel attachment with rows greater than 255 characters, and the rows are being truncated when using function module SO_NEW_DOCUMENT_ATT_SEND_API1.
    I have searched the forum and cannot find an actual solution to this on a 4.6C system.
    Firstly, is it possible to send an Excel attachment with rows greater than 255 characters, and secondly, if yes, has anyone seen any sample code which does this?
    Many thanks
    Daniel

    Hai! Check this coding out.
    Here the internal table l_tab_attach is a temporary table which has a line size of more than 255 chars.
    DATA: BEGIN OF l_tab_attach OCCURS 0,
            line(300), " give whatever char length you want for the output
          END OF l_tab_attach.
    * Concatenate all the header columns and their corresponding row entries into the table l_tab_attach.
    PERFORM send_email TABLES t_message
                              l_tab_attach.
    FORM send_file_as_email_attachment TABLES pit_message
                                              pit_attach
                                        USING p_email
                                              p_mtitle
                                              p_format
                                              p_filename
                                              p_attdescription
                                              p_sender_address
                                              p_sender_addres_type
                                     CHANGING p_error
                                              p_reciever.
      DATA: ld_error    TYPE sy-subrc,
            ld_reciever TYPE sy-subrc,
            ld_mtitle LIKE sodocchgi1-obj_descr,
            ld_email LIKE  somlreci1-receiver,
            ld_format TYPE  so_obj_tp ,
            ld_attdescription TYPE  so_obj_nam ,
            ld_attfilename TYPE  so_obj_des ,
            ld_sender_address LIKE  soextreci1-receiver,
            ld_sender_address_type LIKE  soextreci1-adr_typ,
            ld_receiver LIKE  sy-subrc,
            w_new_obj_id TYPE sofolenti1-object_id,
            t_objhead TYPE STANDARD TABLE OF solisti1 WITH HEADER LINE.
      ld_mtitle              = eml_subj.
      ld_format              = 'XLS'.
      ld_attdescription      = p_attdescription.
      ld_attfilename         = att_nam.
      ld_sender_address      = p_sender_address.
      ld_sender_address_type = p_sender_addres_type.
    * Fill the document data.
      w_doc_data-doc_size = 1.
    * Populate the subject/generic message attributes
      w_doc_data-obj_langu = sy-langu.
      w_doc_data-obj_name  = 'SAPRPT'.
      w_doc_data-obj_descr = ld_mtitle .
      w_doc_data-sensitivty = 'F'.
    * Fill the document data and get size of attachment
      CLEAR w_doc_data.
      DESCRIBE TABLE l_tab_attach LINES w_cnt. " number of attachment lines
      READ TABLE l_tab_attach INDEX w_cnt.
      w_doc_data-doc_size =
         ( w_cnt - 1 ) * 255 + STRLEN( l_tab_attach-line ). " important when line length exceeds 255 chars
      w_doc_data-obj_langu  = sy-langu.
      w_doc_data-obj_name   = 'SAPRPT'.
      w_doc_data-obj_descr  = ld_mtitle.
      w_doc_data-sensitivty = 'F'.
      CLEAR t_attachment.
      REFRESH t_attachment.
      t_attachment[] = pit_attach[].
    * Describe the body of the message
      CLEAR t_packing_list.
      REFRESH t_packing_list.
      t_packing_list-transf_bin = space.
      t_packing_list-head_start = 1.
      t_packing_list-head_num = 0.
      t_packing_list-body_start = 1.
      DESCRIBE TABLE l_tab_message LINES t_packing_list-body_num.
      t_packing_list-doc_type = 'RAW'.
      APPEND t_packing_list.
    * Create attachment notification
      t_packing_list-transf_bin = 'X'.
      t_packing_list-head_start = 1.
      t_packing_list-head_num   = 1.
      t_packing_list-body_start = 1.
      DESCRIBE TABLE t_attachment LINES t_packing_list-body_num.
      t_packing_list-doc_type   =  ld_format.
      t_packing_list-obj_descr  =  ld_attdescription.
      t_packing_list-obj_name   =  ld_attfilename.
      t_packing_list-doc_size   =  t_packing_list-body_num * 255.
      APPEND t_packing_list.
      REFRESH t_receivers.
      LOOP AT mailto.
    * Add the recipients' email address
        CLEAR t_receivers.
        t_receivers-receiver = mailto+3(48).
        t_receivers-rec_type = 'U'.
        t_receivers-com_type = 'INT'.
        t_receivers-notif_del = 'X'.
        t_receivers-notif_ndel = 'X'.
        APPEND t_receivers.
      ENDLOOP.
      CLEAR t_objhead.
      REFRESH t_objhead.
      t_objhead = att_nam.
      APPEND t_objhead.
      CALL FUNCTION 'SO_DOCUMENT_SEND_API1'
        EXPORTING
          document_data              = w_doc_data
          put_in_outbox              = 'X'
          sender_address             = ld_sender_address
          sender_address_type        = ld_sender_address_type
          commit_work                = 'X'
        IMPORTING
          sent_to_all                = w_sent_all
        TABLES
          packing_list               = t_packing_list
          contents_bin               = t_attachment
          contents_txt               = l_tab_message
          receivers                  = t_receivers
          object_header              = t_objhead
        EXCEPTIONS
          too_many_receivers         = 1
          document_not_sent          = 2
          document_type_not_exist    = 3
          operation_no_authorization = 4
          parameter_error            = 5
          x_error                    = 6
          enqueue_error              = 7
          OTHERS                     = 8.
    * Populate zerror return code
      ld_error = sy-subrc.
    * Populate zreceiver return code
      LOOP AT t_receivers.
        ld_receiver = t_receivers-retrn_code.
      ENDLOOP.
    ENDFORM.                    " SEND_FILE_AS_EMAIL_ATTACHMENT
    Don't forget to give points if useful.

  • How do I search for files greater than 500M in size within a directory?

    I would like to know how to recursively search through a directory and its subdirectories for files greater than 500M. What is the command for this?
    Thanks!

    Oh my, it's too early...
    You want >500M files, here you go...
    find /path/to/dir -type f -size +524288000c
    **BLUSH**
    To add something useful here, in ksh you can type
    find /path/to/dir -type f -size +$((500*1024*1024))c
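
    With GNU find you can also let the size suffix do the arithmetic (M counts units of 1048576 bytes, so the rounding differs slightly from the byte-exact forms above):
    find /path/to/dir -type f -size +500M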
