Service design for handling large datasets

As an overnight process we need to invoke 2 services against every record in our database (over 1 million records). Specifically, the process flow should be as follows:
- For each record in the database invoke service A.
- For each record use the return value from service A as a parameter to invoke service B.
If we were to process each record one at a time in a synchronous fashion, the time needed to process all records would be too great. Is there a better way to implement this? I have considered batching, and making asynchronous calls
over a duplex channel, but am unclear which option would be superior.
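For illustration, a minimal sketch of the batched, throttled asynchronous approach under consideration; Record, ServiceA and ServiceB are illustrative stubs, not the real services:

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Record { public int Id; }

static class ServiceA
{
    // Stub standing in for the real service-A proxy.
    public static Task<string> InvokeAsync(Record r) => Task.FromResult("a-" + r.Id);
}

static class ServiceB
{
    // Stub standing in for the real service-B proxy; consumes A's result.
    public static Task InvokeAsync(Record r, string aResult) => Task.CompletedTask;
}

class OvernightProcessor
{
    // Cap the number of in-flight calls so a million records don't flood the services.
    private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(32);

    public static Task ProcessAllAsync(IEnumerable<Record> records) =>
        Task.WhenAll(records.Select(ProcessOneAsync));

    private static async Task ProcessOneAsync(Record record)
    {
        await Throttle.WaitAsync();
        try
        {
            var a = await ServiceA.InvokeAsync(record); // step 1: call service A
            await ServiceB.InvokeAsync(record, a);      // step 2: feed A's result to B
        }
        finally
        {
            Throttle.Release();
        }
    }
}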

DataSets with DataTables (the "salad bowl") are too slow for Service-Oriented Architecture.
http://www.hanselman.com/blog/ReturningDataSetsFromWebServicesIsTheSpawnOfSatanAndRepresentsAllThatIsTrulyEvilInTheWorld.aspx
DataTables use boxing and unboxing, which makes them slow.
http://www.csharphelp.com/2010/02/c-best-practices-to-write-high-performance-code/
You should be using DTOs and a List of DTOs instead.
http://lauteikkehn.blogspot.com/2012/03/datatable-vs-list.html
http://en.wikipedia.org/wiki/Data_transfer_object
http://www.mindscapehq.com/documentation/lightspeed/Building-Distributed-Applications-/Building-WCF-Services-using-Data-Transfer-Objects
On the other hand, if you are using SQL Server, you may want to look into MS SQL Server Service Broker too.
https://technet.microsoft.com/en-us/library/ms166104(v=sql.105).aspx
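To make the DTO advice concrete, a minimal sketch of a WCF contract that returns a list of flat DTOs instead of a DataSet; RecordDto and IRecordService are illustrative names, not from the linked articles:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// A flat, serialization-friendly shape instead of a DataTable row.
[DataContract]
public class RecordDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public decimal Amount { get; set; }
}

[ServiceContract]
public interface IRecordService
{
    // List<RecordDto> avoids the boxing and bloated XML of DataSet/DataTable.
    [OperationContract]
    List<RecordDto> GetRecords(int pageIndex, int pageSize);
}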

Similar Messages

  • Best practices for handling large messages in JCAPS 5.1.3?

    Hi all,
    We have run into problems while processing large messages in JCAPS 5.1.3. They are not that large really, only 10-20 MB.
    Our setup looks like this:
    We retrieve flat file messages from an FTP server. They are put onto a JMS queue and are then converted to and from different XML formats in several steps, using a couple of jcds with JMS queues between them.
    It seems that we can handle one message at a time, but as soon as we get two of these messages simultaneously the logicalhost freezes and crashes in one of the conversion steps without any error message reported in the logicalhost log. We can't relate the crashes to a specific jcd, and the memory consumption of the logicalhost process increases A LOT while handling the messages. After a restart of the server the messages that are in the queues are usually converted OK. Sometimes, however, we have seen some messages simply disappear. Scary stuff!
    I have heard of two possible solutions for handling large messages in JCAPS so far: splitting them into smaller chunks, or streaming them. However, these solutions are not an option in our setup.
    We have manipulated the JVM memory settings without any improvements and we have discussed the issue with Sun's support but they have not been able to help us yet.
    My questions:
    * Any ideas how to handle large messages most efficiently?
    * Any ideas why the crashes occur without any error messages in the logs?
    * Any ideas why messages sometimes disappear?
    * Any other suggestions?
    Thanks
    /Alex

    * Any ideas how to handle large messages most efficiently? --
    Strictly speaking, if you want to send the entire file content in the JMS message, then I don't have an answer for this question.
    Generally we use the following process: after reading the file from the FTP location, we archive it in a local directory and send a JMS message to the queue
    which contains only the file name and file location. In most places we never send file content in a JMS message.
    * Any ideas why the crashes occur without error messages in the logs or nothing?
    Whenever the JMS IQ Manager's memory usage is high, logicalhosts stop processing. I will not say it is down; they
    stop processing, or processing might take a lot of time.
    * Any ideas why messages sometimes disappear?
    Unless persistence is enabled, I believe there is a high chance of losing a message when a logicalhost
    goes down. This is not always the case, but we have faced a similar issue when the IQ Manager was flooded with a lot
    of messages.
    * Any other suggestions
    If the file size is large, it is better to stream the file from the FTP location to a local directory and send only the file
    location in the JMS message.
    Hope it would help.
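    The advice above, sending only the file name and location instead of the payload, is the claim-check pattern. A rough C# sketch of the same idea, with a hypothetical IQueue interface standing in for the JMS queue:
    using System.IO;
    // Hypothetical stand-in for the JMS queue client.
    public interface IQueue { void Send(string message); }
    public class ClaimCheckSender
    {
        private readonly IQueue _queue;
        private readonly string _archiveDir;
        public ClaimCheckSender(IQueue queue, string archiveDir)
        {
            _queue = queue;
            _archiveDir = archiveDir;
        }
        // Archive the file locally and enqueue only its location; the consumer
        // streams the file itself, so the 10-20 MB body never enters the queue.
        public void Send(string sourceFile)
        {
            string archived = Path.Combine(_archiveDir, Path.GetFileName(sourceFile));
            File.Copy(sourceFile, archived, true);
            _queue.Send(archived);
        }
    }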

  • Changing web-services.xml for handler

    I have been using the servicegen task to generate the .ear file for my webservice.
    Among other things, it took care of generating the web-services.xml file for me.
    I need to write a handler, which requires changes in web-services.xml.
    I generated the web-services.xml file the first time, made the handler-related
    changes, and it's working fine.
    But now, I lose the auto-generation facility (since I have hand-edited it for
    handler changes). Any time I change the interface of the webservice, I would
    have to manually change the web-services.xml file to reflect the new interface.
    Is there a better way of doing this, so that I can auto-generate the web-services.xml
    file and still keep my handler changes? Is there anything in Ant to do it smartly, rather
    than doing it manually? I would imagine that anyone who writes a handler would
    run into a similar situation.
    thanks for help.
    John

    Here is an example of ejb and source2wsdd.
    "manoj cheenath" <[email protected]> wrote in message
    news:[email protected]...
    It works with EJBs too. You should use the ejbLink attribute
    to point to the EJB link. Something like:
    <source2wsdd
        javaSource="${sourcecode.for.the.ejb.interface}"
        ddFile="${webss.output.dir}/WEB-INF/web-services.xml"
        typesInfo="${webss.output.dir}/WEB-INF/classes/types.xml"
        serviceURI="${webss.service.url}"
        ejbLink="${webss.ejb.link}" />
    "John" <[email protected]> wrote in message
    news:[email protected]...
    Thanks for the response. But the source2wsdd task only works with Java components,
    whereas we have EJBs. Any other clues/suggestions?
    Thks,
    - John.
    "manoj cheenath" <[email protected]> wrote:
    Here you go.
    regards,
    -manoj
    "John" <[email protected]> wrote in message
    news:[email protected]...
    Thanks for the response. I'll appreciate if you could share the
    workaround
    in 7.0.
    I'm okay with it even if it's unofficial and might go away.
    thks,
    - John.
    "manoj cheenath" <[email protected]> wrote:
    Hi John,
    This is a known problem with WLS 7.0 and WLS 8.1. The
    DD file (web-services.xml) is much more expressive than
    the servicegen ant task, so one needs to modify the DD
    to use some features. But if one modifies the DD, then it
    is difficult to use the ant tasks again for iterative development.
    JSR 181 [1] and JSR 175 [2] try to address this
    problem by providing metadata (markup) in source code.
    Unfortunately these JSRs are in the early stages and
    will be completed around the JDK 1.5 timeframe. There
    is an internal implementation of a similar beast in WLS
    since 7.0 SP2, but it is not officially supported or
    documented, mainly because the above-mentioned JSRs are
    supposed to address the iterative development problem
    in a standard way.
    So, if you cannot wait for the JSRs and don't mind using
    a non-standard implementation that may not be supported
    or may change in the next major release (~WLS 9.0),
    let me know. I can send you details.
    Regards,
    -manoj
    [1] http://www.jcp.org/en/jsr/detail?id=181
    [2] http://www.jcp.org/en/jsr/detail?id=175
    "John" <[email protected]> wrote in message
    news:[email protected]...
    I have been using the servicegen task to generate the .ear file
    for
    my
    webservice.
    Among other things, it took care of generating the
    web-services.xml
    file
    for me.
    I have a need to write a handler and thus it requires changes in
    the
    web-services.xml.
    I generated the web-services.xml file the first time and made the
    handler
    related
    changes and it's working fine.
    But now, I loose the auto-generation facility (since I have
    hand-edited
    it
    for
    handler changes). Any time, I change the interface of the
    webservice,
    I
    would
    have to manually change the web-services.xml file to reflect the
    new
    interface.
    Is there a better way of doing this so that I can auto-generate
    the
    webservices.xml
    file plus make my handler changes. Anything in the ant to do it
    smartly,
    rather
    than doing it manually. I would imagine that anyone who writes a
    handler
    would
    run into similar situation.
    thanks for help.
    John
    [attachments: handler.zip, sample10.zip]

  • Exe on Cloud Service crashing for slightly larger data

    Hi,
    I have launched a C++ exe in a cloud service. It works fine for smaller data but crashes when we provide slightly larger data, which takes hardly 5 minutes to solve when running locally.
    Can anyone suggest what could be the issue?
    Thank you.

    Hi,
    It seems that this is an executionTimeout error. Please try changing the "executionTimeout" value:
    <system.web>
    <httpRuntime executionTimeout="600" />
    </system.web>
    >>Also, can you explain the parallel approach that we can use to handle data on an Azure role? I am using a single web role in my application.
    I am not familiar with C++, but I think you could use concurrency or multiple threads in your project:
    http://stackoverflow.com/questions/218786/concurrent-programming-c
    And I found some resources about how to deploy an exe on Azure; please refer to them:
    http://www.codeproject.com/Articles/331425/Running-an-EXE-in-a-WebRole-on-Windows-Azure
    Alternatively, you could use a startup script to start and run the EXE file; see this blog:
    http://blogs.msdn.com/b/mwasham/archive/2011/03/30/migrating-a-windows-service-to-windows-azure.aspx
    Regards,
    Will

  • Handling large datasets

    Hi gang,
    I have a query which returns a very large result set. My goal is to populate a scrollable JTable with this result set. The result set is so large that it does not fit in memory, so I am looking for options for saving the results of this query.
    I am thinking of writing the results of the query to a CSV file and then reading chunks of the CSV file into a Vector, which is then used to populate the JTable (paging the JTable).
    Do any of you have experience working with large files, and specifically with the performance of reading a CSV file in chunks? Do you think there is a bottleneck which I am ignoring?
    I'll appreciate any suggestions.
    Thanks
    Connie

    I understand you know how to handle paging with scrollable JTables. I furthermore assume you know that a JTable is backed by a TableModel which contains the data you want to display.
    You state that the result set is likely to exceed the memory of the client computer. A question may be allowed: is it reasonable to display the ENTIRE result set in a single table then? Assuming that each row occupies one kB of RAM, 64,000 rows would consume 64 MB of RAM, which modern computers CAN handle. Do you really want to ask users to visually handle 64,000 table rows?
    However, the New I/O introduced with JDK 1.4 might help. Write the entire result set into a file (CSV or binary octet stream), and map portions of it into memory using FileChannel.map(mode, position, size) with varying position and size parameters depending on the portion to be displayed.
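    The same windowing idea, sketched with .NET's MemoryMappedFile as a rough analogue of the Java FileChannel.map approach above; fixed-width rows of 1 kB are an assumption for illustration:
    using System.IO.MemoryMappedFiles;
    using System.Text;
    class ResultPager
    {
        const int RowSize = 1024; // assumed fixed-width rows, 1 kB each
        // Map only the window of rows currently displayed, never the whole file.
        public static string[] ReadPage(string path, long firstRow, int rowCount)
        {
            using (var mmf = MemoryMappedFile.CreateFromFile(path))
            using (var view = mmf.CreateViewAccessor(firstRow * RowSize, (long)rowCount * RowSize))
            {
                var rows = new string[rowCount];
                var buffer = new byte[RowSize];
                for (int i = 0; i < rowCount; i++)
                {
                    view.ReadArray((long)i * RowSize, buffer, 0, RowSize);
                    rows[i] = Encoding.UTF8.GetString(buffer).TrimEnd('\0', '\r', '\n');
                }
                return rows;
            }
        }
    }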

  • Java proxies for handling large files

    Dear all,
    Kindly explain how to handle this step by step, as I do not know much about Java.
    What is the advantage of using Java proxies here? Do we implement the split logic in Java code for handling the 600 MB file?
    please mail me the same to [email protected]

    Hi Srinivas,
    Check out this blog for the large file handling issue:
    /people/sravya.talanki2/blog/2005/11/29/night-mare-processing-huge-files-in-sap-xi
    This will help you.
    Please see the documents below. This might help you
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/a068cf2f-0401-0010-2aa9-f5ae4b2096f9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f272165e-0401-0010-b4a1-e7eb8903501d
    /people/prasad.ulagappan2/blog/2005/06/27/asynchronous-inbound-java-proxy
    /people/rashmi.ramalingam2/blog/2005/06/25/an-illustration-of-java-server-proxy
    We can also find them on your XI/PI server, in these folders:
    aii_proxy_xirt.jar: j2ee/cluster/server0/bin/ext/com.sap.aii.proxy.xiruntime
    aii_msg_runtime.jar: j2ee/cluster/server0/bin/ext/com.sap.aii.messaging.runtime
    aii_utilxi_misc.jar: j2ee/cluster/server0/bin/ext/com.sap.xi.util.misc
    guidgenerator.jar: j2ee/cluster/server0/bin/ext/com.sap.guid
    Java Proxy
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a068cf2f-0401-0010-2aa9-f5ae4b2096f9
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f272165e-0401-0010-b4a1-e7eb8903501d
    Pls reward if useful

  • SSAS Tabular : MDX query goes OutOfMemory for a larger dataset

    Hello all,
    I am using SSAS 2012 Tabular to build a cube to support our organizational reporting requirements. The server is Windows 2008 x64 with 16 GB of RAM installed. I have the following MDX query. What this query does is get the member caption of the “OrderGroupNumber” non-key attribute as a measure, for order group numbers that pertain to a specific day and occur in specific seconds of that day. As I want to find in which second I have order group numbers, I cross the time dimension's members with a specific day and filter the tuples using the transaction count. The transaction count is non-zero if an Order Group Number occurs within a specific second of the selected day.
    At present [TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber] has 170+ million members (potentially this could grow rapidly) and the time dimension has 86,400 members.
    WITH MEMBER [Measures].[OrderGroupNumber] AS
        IIF([Measures].[Transaction Count] > 0,
            [TransactionsInflight].[OrderGroupNumber].CURRENTMEMBER.MEMBER_CAPTION,
            NULL)
    SELECT
        NON EMPTY {[TransactionsInflight].[OrderGroupNumber].[OrderGroupNumber].MEMBERS} ON COLUMNS,
        {FILTER(([Date].[Calendar Hierarchy].[Date].&[2012-07-05T00:00:00], [Time].[Time].[Time].MEMBERS),
                [Measures].[Transaction Count] > 0)} ON ROWS
    FROM [OrgDataCube]
    WHERE [Measures].[OrderGroupNumber]
    After I run this query, it reaches a dead end and freezes the server (sometimes the SSAS server throws an OutOfMemory exception, but sometimes it does not). Even though I have 16 GB of memory, it uses all of it while getting nothing done, and I have to hard-reset the server to get it back online. Even if I limit the time members using the “:” range operator, the machine still freezes. I have run out of ideas for fine-tuning the design. Could you give me some guidelines for optimizing this query? I am willing to make a design change if necessary.
    Thanks and best regards,
    Chandima

    Hi Greg,
    Finally I found out why the query goes out of memory in tabular mode. I guess this information will be helpful for others, so I am posting my findings.
    Some of the non-key attribute columns in the tabular model tables (mainly the tables which form dimensions) do not have pretty names, so I renamed the columns that needed them. For example, in my date dimension there is a non-key attribute named “DateAltKey”. This is the date column which I am using. As this is not pretty for the client tools, I renamed this column to “Date” inside the designer (dimension design screen). I deployed the cube, processed the cube, and there was no problem.
    Now here comes the fun part. For every table, inside the Tables node (Tabular SSAS Database > Tables) you can view the partition details; you have a single partition per dimension table if you do not create extra partitions. I opened the partitions screen, clicked the “Edit” icon and performed a Syntax Check. Surprisingly it failed, complaining about the renamed column: “Date” cannot be found in the source. So I realized that I cannot simply rename columns like that.
    After that I created calculated columns (with pretty names) for all the columns it complained about, and hid all the source columns behind the calculated columns from the client tools. I deployed the cube, processed the cube and performed a syntax check. No errors; everything was perfect.
    I ran the query which gave me trouble and guess what... it executed within 5 seconds. My problem is solved. I really do not know why this improved the performance, but the trick worked for me.
    Thanks a lot for your support.
    Chandima

  • Seeking recommendations for handling large binary documents with security (preferable) for inbound and outbound scenarios, OSB-SOA and SOA-OSB

    Hi,
    I am currently working on a project with the following requirements
    1. Client transfers binary document (between 1-20MB in size) from OSB proxy to SOA composite to Content Management system
    2. Client retrieves binary document (between 1-20MB in size) from Content Management system to SOA composite to OSB proxy
    In other words, an inbound and an outbound integration.
    What I have tried so far and my results:
    Scenario A
    1. Enabled MTOM on SOA composite by attaching wsmtom policy
    2. Created an OSB business service and consumed the SOA composite application
    3. Enabled MTOM on OSB proxy and business service and configured it to pass by reference
    Scenario B
    1. Enabled MTOM and security on SOA composite by attaching wsmtom policy and SAML policy
    2. Created an OSB business service and consumed the SOA composite application
    3. Enabled MTOM on OSB proxy and business service and configured it to pass by reference
    I have a demo integration setup that writes a binary document to a file using the above steps. My SOA composite has a file adapter that writes the binary data to an external file, and it is exposed as a web service with a simple WSDL definition that has an inline XSD schema with a single element of base64Binary type. I have added a mediator that maps this base64Binary element node to the file adapter's input node.
    Result for Scenario A with file size less than 1 MB:
    Flawless execution with sub-second response times
    Result for Scenario A with file size of 8MB
    First attempt: The SOA composite faults with a database-transaction-related error; solved by increasing the JTA timeout.
    Second attempt: Flawless execution, but the file transfer took over 100 seconds to complete. This is very poor performance, and my suspicion is that this cannot be the expected behaviour, but I don't know the internal workings of the SOA composite or why it's taking this long.
    Result for Scenario B:
    The OSB business service does not accept/recognize the SAML policy in the WSDL and suggests configuring OWSM policies manually, but OWSM policy in OSB does not include the wsmtom policy. Regardless of this, every permutation of MTOM + WSS security in this integration scenario either did not work outright, or MTOM optimization was not happening, i.e. binary data was materializing in the message body.
    I have only about 3 weeks left to implement a viable solution, and the closest I've come to one is Scenario A, but that 100+ second response time for an 8 MB file is really worrying.
    I would appreciate any level of guidance, recommendations or suggestions as to how I go about tackling this problem.
    Thanks
    regards,
    Johnny

    I think this is due to the underlying mechanism of WebLogic classloading.
    You can contact Oracle support at https://support.oracle.com to report issues. Roughly, this is the process:
    1. Get the Oracle Customer Support Identifier (CSI) for the client you are working for.
    2. Create a user profile quoting the CSI. This will send an approval request to the Oracle support admins at your client.
    3. Get the Oracle support admins at your client site to approve your request for support access.
    4. Once they approve, you can access the support site and raise service requests.

  • Payload Streaming (for handling large payload) in Oracle JCA Adapter for AQ

    Hi All-
    Oracle Documentation indicates that it supports Payload Streaming in Oracle JCA Adapter for AQ. Link http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10231/adptr_aq.htm#CBAIAABF
    However when I tried configuring an AQ Adapter in Jdeveloper, I was not able to see the check box for enabling Payload Streaming.
    Do we have to manually update the .jca file to add the property "EnableStreaming" in the AQ Adapter Activation Spec? Is it supported and is it going to work?
    What is the Message Size limit that the AQ Adapter can handle?
    Please let me know.
    Thanks,
    Dibya

    If the StreamPayload property does not exist, then the default value false is assumed.
    <activation-spec className="oracle.tip.adapter.aq.inbound.AQDequeueActivationSpec">
        <property name="QueueName" value="RAW_IN_QUEUE"/>
        <property name="DatabaseSchema" value="SCOTT"/>
        <property name="StreamPayload" value="true"/>
    </activation-spec>
    You can add <property name="StreamPayload" value="true"/>
    to the .jca file, but remember: this property is applicable when processing Raw messages, XMLType messages, and ADT type messages for which a payload is specified through an ADT attribute.

  • Code Optimization for handling large volume of data

    Hi All,
    We are facing a problem when executing a report: it takes a lot of time to execute, and many times the program is terminated with the dump "Timeout: Program terminated because of endless loop".
    The internal table which has to be looped over has more than 8.5 lakh (850,000) records,
    and each pass of the loop performs two READs and one SELECT statement (unavoidable).
    (We have followed almost all the optimization techniques.)
    Please suggest any ideas as to what can be done in such a situation.
    Thanks and Regards,
    Sushil Hadge.

    Hi Martin,
    Following is the piece of code.....
    SELECT bukrs gpart hkont waers
          FROM dfkkop
          INTO TABLE it_dfkkop
          WHERE bukrs = p_bukrs AND bldat IN so_bldat AND hkont IN so_hkont.
    SORT it_dfkkop BY gpart.
    LOOP AT it_dfkkop INTO wa_dfkkop.
      <Read statement>
      <Read statement>
      ON CHANGE OF wa_dfkkop-gpart.
        SELECT gpart hkont waers betrw FROM dfkkop INTO TABLE it_subtot
          WHERE hkont = wa_dfkkop-hkont AND gpart = wa_dfkkop-gpart.
        IF it_subtot IS NOT INITIAL.
          LOOP AT it_subtot INTO wa_subtot.
            v_sum = v_sum + wa_subtot-betrw.
          ENDLOOP.
        ENDIF.
      ENDON.
    ENDLOOP.
    Please suggest if this can be improved in some way....
    Thanks ,
    Sushil
    Edited by: Sushil Hadge on Jun 4, 2008 3:12 PM

  • Data Services Designer 14 - Large Log files

    Hello,
    we're running several jobs with Data Services Designer 14 and everything works fine.
    But today a problem occurred:
    after finishing a big job, the Data Services Designer on a client machine produced a very large log file (8 GB) in the Data Services Designer folder.
    Is it possible to delete these log files automatically or restrict the maximum size of the created log files in the designer?
    What's the best way?
    Thanks!

    You can set the log files to be deleted automatically after a number of days.
    I have done this in XI 3.2, but as per the documentation, this is how it can be done in DS 14.0.
    In DS 14.0, this is handled in CMC.
    1. Log into the Central Management Console (CMC) as a user with administrative rights to the Data Services application.
    2. Go to the “Applications” management area of the CMC. The “Applications” dialog box appears.
    3. Right-click the Data Services application and select Settings. The “Settings” dialog box appears.
    4. In the Job Server Log Retention Period box, enter the number of days that you want to retain the following:
    • Historical batch job error, trace, and monitor logs
    • Current service provider trace and error logs
    • Current and historical Access Server logs
    The software deletes all log files beyond this period. For example:
    • If you enter 1, then the software displays the logs for today only. After 12:00 AM, these logs clear and the software begins saving logs for the next day.
    • If you enter 0, then no logs are maintained.
    • If you enter -1, then no logs are deleted.
    Regards,
    Suneer.

  • Is this the best design for asynchronous notifications (such as email)? Current design uses Web Site, Azure Service Bus Queue, Table Storage and Cloud Service Worker Role.

    I am asking for feedback on this design. Here is an example user story:
    As a group admin on the website I want to be notified when a user in my group uploads a file to the group.
    The easiest solution would be to create and send the email message directly in the code handling the upload. However, this doesn't seem like the appropriate level of separation of concerns, so instead we are thinking of having a separate
    worker process which does nothing but send notifications. The website's upload code handles receiving the file, extracting some metadata from it (like the filename) and writing this to the database. As soon as it is done handling the file upload it
    does two things: writes the details of the notification to be sent (such as subject, filename, etc.) to a dedicated "notification" table, and also creates a message in a queue which the notification-sending worker process monitors. The entire sequence
    is shown in the steps below.
    My questions are: Do you see any drawbacks in this design? Is there a better design? The team wants to use Azure Worker Roles, Queues and Table storage. Is it the right call to use these components or is this design unnecessarily complex? Quality attribute
    requirements are that it is easy to code, easy to maintain, easy to debug at runtime, auditable (history is available of when notifications were sent, etc...), monitor-able. Any other quality attributes you think we should be designing for?
    More info:
    We are creating a cloud application (in Azure) in which there are at least 2 components. The first is the "source" component (for example a UI / website) in which some action happens or some condition is met that triggers a second component or "worker"
    to perform some job. These jobs have details or metadata associated with them which we plan to store in Azure Table Storage. Here is the pattern we are considering:
    Steps:
    Condition for job met.
    Source writes job details to table.
    Source puts job in queue.
    Asynchronously:
    Worker accepts job from queue.
    Worker Records DateTimeStarted in table.
    Queue marks job as "in progress".
    Worker performs job.
    Worker updates table with details (including DateTimeCompleted).
    Worker reports completion to queue.
    Job deleted from queue.
    Please comment and let me know if I have this right, or if there is some better pattern. For example's sake, consider the work to be "sending a notification" such as an email whose template fields are filled from the "details" mentioned in
    the pattern.
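    For reference, a rough sketch of the worker side of this pattern using the classic Azure storage SDK; the queue name and the RecordStarted/SendNotification/RecordCompleted helpers are placeholders, and error handling and poison-message checks are omitted:
    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;
    class NotificationWorker
    {
        public static void Run(string connectionString)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var queue = account.CreateCloudQueueClient().GetQueueReference("notifications");
            while (true)
            {
                // The message stays invisible to other workers while we process it.
                var msg = queue.GetMessage(TimeSpan.FromMinutes(5));
                if (msg == null) { Thread.Sleep(1000); continue; }
                string jobId = msg.AsString;  // key of the details row in table storage
                RecordStarted(jobId);         // placeholder: write DateTimeStarted to the table
                SendNotification(jobId);      // placeholder: fill template fields, send email
                RecordCompleted(jobId);       // placeholder: write DateTimeCompleted
                queue.DeleteMessage(msg);     // delete only after the work has succeeded
            }
        }
        static void RecordStarted(string jobId) { /* table update elided */ }
        static void SendNotification(string jobId) { /* email send elided */ }
        static void RecordCompleted(string jobId) { /* table update elided */ }
    }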

    Hi,
    Thanks for your posting.
    This design can exclude some errors, such as file uploads completing at the same time... from my experience, this is a good choice to achieve the goal.
    Best Regards,
    Jambor  

  • Data Services Designer - Error when pulling large source tables

    Hi all,
    I have been trying to load data from an SAP table (BSIS) into an MS SQL Server database using BO Data Services Designer XI 3.2. It is a simple data flow with one source table (BSIS).
    When we execute the job, it says what is mentioned below:
    *"Process to execute Dataflow is started"*
    *"Cache statistics determined that DataFlow uses <0> caches with a total use of <0> bytes. This is less than the virtual memory <1609564160> bytes available for caches. Statistics is switching the cache type to IN MEMORY."*
    *"Dataflow using IN MEMORY cache."
    It stays there for a while and says "Dataflow terminated due to error"
    In the error window, it says DF received a bad system message.
    Does not specify the error... It asks to contact the customer support with error logs, ATL files and DDL scripts.
    Can anyone help me out???
    Thank you and regards,
    Suneer.

    Hi,
    Please do not post the short dump in this forum.
    I believe the system will read from table dt_iobj_dest.
    The problem is that 0FISCPER, 0FISCYEAR and 0FISCVARNT are not registered.
    You can register the InfoObjects 0FISCYEAR and 0FISCVARNT retroactively:
    1) SE24, CL_UG_BW_FIELD_MAPPING
    2) F8 -> test environment
    3) GET_INSTANCE, do not use IT_FIELD_RESTRICT
    4) IF_UG_BW_MAPPING_SERVICES
    5) REGISTER_INFO_OBJECT
    6) specify I_RFCDEST and I_INFOOBJECT (attention: I_RFCDEST is case-sensitive)
    Kind regards,
    Michael
    PS: please do not forget to assign points.

  • Handling Large Files in XI

    We have designed a couple of integration processes for the project. We need to handle messages larger than 5 MB at a rate of approximately 100 msgs/hr, and we did the tuning as per the tuning guide. We are now facing the following issues:
    1. A JCo connection failure error occurs whenever large files are handled by XI.
    2. An ICM_HTTP_INTERNAL_ERROR occurs whenever large files are handled by XI.
    Does anyone have a solution for these issues?

    I am sure that you already have checked sizing requirements for large sized messages. If not, it might be worth looking at this:
    The memory consumption of XI depends on the number of processes running in parallel and the size of the message.
    In general, an extra sizing for XI memory consumption is not required. The total memory of the SAP Web Application Server should be sufficient except in the case of large messages (>1MB).
    To determine the memory consumption for processing large messages, you can use the following rules of thumb:
      Allocate 3 MB per process (for example, the number of parallel messages per second may be an indicator)
      Allocate 4 kB per 1kB of message size in the asynchronous case or 9 kB per 1kB message size in the synchronous case
      Example: asynchronous concurrent processing of 10 messages with a size of 1 MB requires (3 MB + 4 × 1 MB) × 10 = 70 MB of memory.
    With mapping or content-based routing, where an internal representation of the message payload may be necessary, the memory requirements can be much higher (possibly exceeding 20 kB per 1 kB of
    message, depending on the type of mapping).
    The size of the largest message thus depends mainly on the size of the available main memory. On a normal 32Bit operating system, there is an upper boundary of approximately 1.5 to 2 GByte per process, limiting the respective largest message size.
    Hope it helps.
    Cheers, Sachin K

  • Handling large messages with MQ JMS sender adapter

    Hi.
    I'm having trouble handling large messages with an MQ JMS sender adapter.
    The messages are around 35-40 MB.
    Are there any settings I can adjust to make the communication channel work?
    Error message is:
    A channel error occurred. The detailed error (if any) : JMS error:MQJMS2002: failed to get message from MQ queue, Linked error:MQJE001: Completion Code 2, Reason 2010, Error Code:MQJMS2002
    The communication channel works fine with small messages!
    I'm on SAP PI 7.11; the MQ driver is version 6.
    Best Regards...
    Peter

    The problem solved itself when the MQ server crashed and restarted.
    I did find a note that might have been useful:
    Note 1258335 - Tuning the JMS service for large messages or many consumers
    A relevant post as well: http://forums.sdn.sap.com/thread.jspa?threadID=1550399
