Alternative to FV70 Transaction

Hi All,
I need an alternative transaction to FV70 where I can enter special G/L indicator 1 (other account receivables). When I use FV70, it requires additional customizing: I have to maintain the posting key in OBXW for the AGX transaction. I want to avoid this additional customizing.
So I am looking for another transaction code to post this type of journal entry.
Is there any alternative to the FV70 transaction?
Thank You in advance
Regards
Andrew

Hi Ashish,
I didn't mean that the transaction doesn't have a special G/L indicator field; it does, but when I enter posting key 09 (special G/L debit) and special G/L indicator 1 (other account receivables), I get the error message
"Special G/L transactions not defined for bills/exch.and down pmnts. Message no. FP 030"
But this kind of journal entry is possible in FV70 if I customize the posting key in OBXW for the AGX transaction (customer invoice with special G/L indicator).
I hope this clarifies the problem.
Regards
Andrew

Similar Messages

  • FB70 and FV70 transactions

    Hi All,
    I am using transaction FB70 to post customer invoices. However, the requirement states that
    AR (accounts receivable) users should be able to change an invoice in parked mode prior to posting (transaction code FV70). But my report runs in background mode.
    My question is: how do I determine which documents should be parked and which should be posted?
    Please help me.
    regards,
    Rakesh.

    It sounds like you need to park them all and let the users decide if they want to make changes before posting.
    Rob

  • Non GUI alternative to PB10 Transaction (infpotypes 22 and 24)

    Hi guys,
    This is a java foreigner in the ABAP land.
    I hope I am asking the correct question.
    This is the situation:
    In order to send a "candidate's resume" (curriculum) into SAP, the ABAP developers created an RFC for me which uses transaction PB10 (applicant master data).
    The RFC when tested in the SAP GUI works fine.
    I wrote a java client which calls the RFC, send the parameters and expect the results. This client uses JCO to access SAP.
    But when called from the Java client, the RFC only works partially:
    it does not fill the education and qualifications infotypes (22 and 24), and throws the error message
    "Exception condition "CNTL_ERROR" raised"
    It seems a GUI control is required, but one is not available from JCo.
    So, I need a non-GUI alternative to PB10 which writes the data to infotypes 22 and 24.
    So, the question here is:
    Does somebody know of a non-GUI alternative to PB10 for writing infotypes 22 and 24?
    Thanks in advance,
    Luis

    I have a similar requirement: from a non-SAP system, I am supposed to pass data that is saved in the tables behind PB10. Can you send me the RFC you developed for your requirement? It would be very helpful.
    thank you

  • Using Alternative UOM in transaction VA21

    We want to use an alternative UOM for a hydrocarbon product when creating a quotation with transaction VA21 and document type QT, so that the user can change the alternative UOM (say, to KG or L15) or change the temperature and density of the material.
    I would be grateful if anyone could provide a solution.

    Hi!
    Please help with the following:
    In the material master for material HSD02, the base UOM is L15 and the sales unit is KL.
    Earlier, in Additional Data, the conversion 1 KL <=> 1000 L15 was stored.
    Now I want to change the conversion factor to 1 KL <=> 987.9 L15, i.e. 10 KL <=> 9879 L15.
    Hence I tried to delete the earlier entry (1 KL <=> 1000 L15) and maintain 10 KL <=> 9879 L15 instead.
    This is not allowed. On saving the change, the earlier entry 1 KL <=> 1000 L15 reappears in Additional Data -> Unit of Measure.
    Thanks in anticipation.
    Pranjal

  • Can we change the alternative COA after transaction data posting

    Hi Friends,
    My client has entered the wrong G/L account in the alternative chart of accounts field in the company code segment. They didn't realise this and posted some transaction data as well.
    Today they noticed it and tried to change the alternative COA, but got an error message that the G/L account balance is not zero. They tried to clear the G/L account balance, but could not resolve the issue.
    Please let me know if there is any other way to change the alternative COA after transaction data has been posted.
    Thanks,
    Dwarak

    Thanks.

  • Alternative for FV60 transaction

    Hi All,
    I need an alternative method/process of parking an invoice, as is done in FV60.
    I tried using FM PRELIMINARY_POSTING_FB01, but it is not creating the BSEG records.
    I also tried FM POST_DOCUMENT; although it creates BKPF and BSEG records, the split of the records does not happen.
    Can anyone suggest an alternative way of parking an FI invoice through a function module etc., as is done in transaction FV60?
    Thanks
    Raj

    Hi
    Thanks for your reply but this BAPI you suggested is for transaction MIR7, but not for FV60.
    I need an alternative for FV60.
    Regards
    Raj

  • Alternatives for enjoy transaction (ME21N) Recording

    Hi all,
    I need to populate data in the Enjoy transaction ME21N, but we cannot use BDC for this. Is there any other solution where we can pre-populate the required fields so that the user only has to check and save the document manually? I just need to populate data in transaction ME21N.
    Regards,
    Gautham

    Hi,
    For ME21N: use BAPI BAPI_PO_CREATE1.
    For ME21: use BAPI BAPI_PO_CREATE or IDoc PORDCR.
    For ME22N: use BAPI BAPI_PO_CHANGE.
    Hope this helps.
    Regds, Murugesh

  • Creating Purchasing info records using transaction ME11

    Hello Friends,
    I am trying to post purchasing info records through ME11.
    Everything is fine, but we are not able to record the Conditions tab, i.e. condition quantity and scale quantities.
    Can anybody please tell me if there is any BAPI or function module to post purchasing info records,
    i.e. an alternative to transaction ME11?
    Thanks in advance,
    Regards,
    Phaneendra

    Hello Rahul,
    Thanks for your reply; I am using the same approach here.
    But I am not able to update data via the KONP and KONM structures.
    Do you have any idea about this?
    Only one line from the flat file is read by the program for the KONM structure. Please help.
    Regards,
    Phaneendra
    Edited by: phaneendra punukollu on Feb 8, 2010 4:36 PM

  • How to call CJ20N transaction

    Hi All,
    How can I call transaction CJ20N from a custom transaction/custom program?
    Generally, some transactions have an OPEN function for opening another project; how do I handle this type of transaction?
    Help me.
    Thanks,
    Srinivas Manai

    Hi,
    <b>You can neither use CALL TRANSACTION nor the BDC session method for transaction CJ20N</b>. It is an Enjoy transaction and is built on object-oriented technology. Transactions of this kind (ME21N, ME22N, KE23N, ME51N, etc.) do not support CALL TRANSACTION.
    Use transaction <b>CJ20</b> instead.
    Let me know your exact requirement, there may be an alternative to CALL TRANSACTION or BDC.
    Regards,
    RS

  • About transaction logs

    Can you tell me about transaction log space? How does it get full, and how is it related to performance?

    Hi,
    Monitoring the SAP Log Disk
    Use
    The size of the transaction log must be checked regularly to work out how much free space is available on the log disk. There should always be enough free space to allow the next file extension. When the SAP system has been installed the autogrow increment is set. At least the size of this increment should be available on the log disk to permit the next file extension. If less space is available and the transaction log file fills up, the SAP system will come to a standstill.
    Ideally, the transaction log should never be filled to more than 60-70%. If the transaction log regularly exceeds this level between 2 transaction log backups, the transaction log must be saved at more frequent time intervals.
    The size of the log can be assessed on the basis of information given for completed backups in the SAP transaction for Backup and Restore Information.
    Procedure:
    1. To access the transaction for Backup Restore Information, choose CCMS → DB Administration → Backup logs. Alternatively, enter transaction code DB12. The initial screen of the monitor CCMS Monitoring Tool – DB12 (Backup Restore Information) appears.
    2. Choose Backup history and then Logs Backup.
    3. A result list appears. Find the largest transaction log backup of the past week. Select a row and then History info to find out the number of pages that were processed during the backup. To work out the amount of space used in the transaction log, multiply the number of dumped pages by 8 KB. You can then work out how much free space is left on the transaction log disk.
    If you use a RAID1 disk system exclusively for the SAP transaction log and create hourly log backups, you will rarely encounter space problems. The SAP log file is initially created with a size of 1 GB. The smallest disk normally has 9 GB space and the log file can therefore grow to 9 GB.
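The 8 KB arithmetic in step 3 can be checked with a few lines of Python. The page count below is a made-up example value, not output from any real DB12 system:

```python
# Hypothetical figure for illustration: suppose the DB12 backup history
# reports 600,000 dumped pages for the largest log backup of the week.
PAGE_SIZE_KB = 8                 # SQL Server page size used by the procedure
dumped_pages = 600_000           # assumed value read from DB12 history info

used_kb = dumped_pages * PAGE_SIZE_KB      # space used in the transaction log
used_gb = used_kb / (1024 * 1024)          # convert KB -> GB
print(f"Transaction log space used: {used_gb:.2f} GB")  # 4.58 GB
```

Comparing that figure against the log disk size (9 GB in the example above) gives the remaining headroom.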
    Hope it Helps
    Srini

  • Cost of different alternative

    Hi SAP gurus,
    I have a product which contains 4 alternative BOMs.
    Now I want to know which one has the lowest cost.
    Currently I have to make one alternative active at a time and then run transaction CK11N (this is time consuming).
    Is there any way I can easily get the cost of the different alternatives in a single transaction?
    Best Regards,
    Parag Save

    Dear Parag,
    I don't think it's required to set status 01 (active) for the alternative BOM to be costed and status 02 for the remaining BOMs.
    Instead, in CK11N, after entering the costing variant, plant and material, click on the quantity structure tab page; there you can directly assign the alternative BOM and routing, or the production version, for costing purposes.
    Check and revert back.
    Regards
    Mangalraj.S

  • Changing Isolation Level Mid-Transaction

    Hi,
    I have a SS bean which, within a single container managed transaction, makes numerous
    database accesses. Under high load, we start having serious contention issues
    on our MS SQL server database. In order to reduce these issues, I would like
    to reduce my isolation requirements in some of the steps of the transaction.
    To my knowledge, there are two ways to achieve this: a) specify isolation at the
    connection level, or b) use locking hints such as NOLOCK or ROWLOCK in the SQL
    statements. My questions are:
    1) If all db access is done within a single tx, can the isolation level be changed
    back and forth?
    2) Is it best to set the isolation level at the JDBC level or to use the MS SQL
    locking hints?
    Is there any other solution I'm missing?
    Thanks,
    Sebastien
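For illustration, here is a minimal sketch of changing a connection's isolation behaviour between transactions, using Python's built-in sqlite3 module as a stand-in for the MS SQL/JDBC setup in the question. The mid-transaction case is exactly where real drivers differ: many only apply the change at the start of the next transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

# First transaction under the default DEFERRED behaviour: sqlite3 opens
# a transaction implicitly before the INSERT and holds it until commit().
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# Changing the connection's isolation behaviour *between* transactions
# is safe; changing it mid-transaction is the driver-specific part.
conn.isolation_level = "IMMEDIATE"   # acquire the write lock at BEGIN
conn.execute("INSERT INTO t VALUES (2)")
conn.commit()

rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows)  # 2
```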

    Galen Boyer wrote:
    On Sun, 28 Mar 2004, [email protected] wrote:
    Galen Boyer wrote:
    On Wed, 24 Mar 2004, [email protected] wrote:
    Oracle's serializable isolation level doesn't offer what most
    customers I've seen expect it to offer. They typically expect
    that a serializable transaction will block any read-data from
    being altered during the transaction, and oracle doesn't do
    that.I haven't implemented WEB systems that employ anything but
    the default concurrency control, because a web transaction is
    usually very long running and therefore holding a connection
    open during its life is unscalable. But, your statement did make me curious. I tried a quick test case.
    In one SQLPLUS session:
    SQL> alter session set isolation_level = serializable;
    SQL> select * from t1;
    ID FL
    ---------- --
    1 AA
    2 BB
    3 CC
    Now, in another SQLPLUS session:
    SQL> update t1 set fld = 'YY' where id = 1;
    1 row updated.
    SQL> commit;
    Commit complete.
    Now, back to the previous session:
    SQL> select * from t1;
    ID FL
    ---------- --
    1 AA
    2 BB
    3 CC
    So, your statement is incorrect.
    Hi, and thank you for the diligence to explore. No, actually you proved my point. If you did that with SQLServer or Sybase, your second session's update would have blocked until you committed your first session's transaction.
    Yes, but this doesn't have anything to do with serializable.
    This is the weak behaviour of those systems that say writers can
    block readers.
    Weak or strong, depending on the customer point of view. It does guarantee
    that the locking tx can continue, and read the real data, and eventually change
    it, if necessary without fear of blockage by another tx etc.
    In your example, you were able to change and commit the real
    data out from under the first, serializable transaction. The
    reason why your first transaction is still able to 'see the old
    value' after the second tx committed, is not because it's
    really the truth (else why did oracle allow you to commit the
    other session?). What you're seeing in the first transaction's
    repeat read is an obsolete copy of the data that the DBMS
    made when you first read it.
    Yes, this is true.
    Oracle copied that data at that time into the per-table,
    statically defined space that Tom spoke about. Until you commit
    that first transaction, some other session could drop the whole
    table and you'd never know it.
    This is incorrect.
    Thanks. Point taken. It is true that you could have done a complete delete
    of all rows in the table though..., correct?
    That's the fast-and-loose way oracle implements
    repeatable-read! My point is that almost everyone trying to
    serialize transactions wants the real data not to
    change.
    Okay, then you have to lock whatever you read, completely.
    SELECT FOR UPDATE will do this for your customers, but
    serializable won't.
    Is this the standard definition of serializable or just customer expectation of it? AFAIU, serializable protects you from overriding already committed data.
    The definition of serializable is loose enough to allow
    oracle's implementation, but non-changing relevant data is
    a typically understood hope for serializable. Serializable
    transactions typically involve reading and writing *only
    already committed data*. Only DIRTY_READ allows any access to
    pre-committed data. The point is that people assume that a
    serializable transaction will not have any of it's data re
    committed, ie: altered by some other tx, during the serializable
    tx.
    Oracle's rationale for allowing your example is the semantic
    arguement that in spite of the fact that your first transaction
    started first, and could continue indefinitely assuming it was
    still reading AA, BB, CC from that table, because even though
    the second transaction started later, the two transactions *so
    far*, could have been serialized.
    I believe they rationalize it by saying that the state of the data at the time the transaction started is the state throughout the transaction.
    Yes, but the customer assumes that the data is the data. The customer
    typically has no interest in a copy of the data staying the same
    throughout the transaction.
    Ie: If the second tx had started after your first had
    committed, everything would have been the same.
    This is true!
    However, depending on what your first tx goes on to do,
    depending on what assumptions it makes about the supposedly
    still current contents of that table, it may ether be wrong, or
    eventually do something that makes the two transactions
    inconsistent so they couldn't have been serialized. It is only
    at this later point that the first long-running transaction
    will be told "Oooops. This tx could not be serialized. Please
    start all over again". Other DBMSes will completely prevent
    that from happening. Their value is that when you say 'commit',
    there is almost no possibility of the commit failing.
    But this isn't the argument against Oracle. The unable-to-serialize error doesn't happen at commit, it happens at the write of
    already changed data. You don't have to wait until issuing
    commit, you just have to wait until you update the row already
    changed. But, yes, that can be longer than you might wish it to
    be.
    True. Unfortunately the typical application writer logic may
    do stuff which never changes the read data directly, but makes
    changes that are implicitly valid only when the read data is
    as it was read. Sometimes the logic is conditional so it may never
    write anything, but may depend on that read data staying the same.
    The issue is that some logic wants truely serialized transactions,
    which block each other on entry to the transaction, and with
    lots of DBMSes, the serializable isolation level allows the
    serialization to start with a read. Oracle provides "FOR UPDATE"
    which can supply this. It is just that most people don't know
    they need it.
    With Oracle and serializable, 'you pay your money and take your
    chances'. You don't lose your money, but you may lose a lot of
    time because of the deferred checking of serializable
    guarantees.
    Other than that, the clunky way that oracle saves temporary
    transaction-bookkeeping data in statically-defined per-table
    space causes odd problems we have to explain, such as when a
    complicated query requires more of this memory than has been
    alloted to the table(s) the DBMS will throw an exception
    saying it can't serialize the transaction. This can occur even
    if there is only one user logged into the DBMS.
    This one I thought was probably solved by database settings,
    so I did a quick search, and Tom Kyte was the first link I
    clicked and he seems to have dealt with this issue before.
    http://tinyurl.com/3xcb7 He writes: "serializable will give you repeatable read. Make sure you test lots with this, playing with the initrans on the objects to avoid the 'cannot serialize access' errors you will get otherwise (in other databases, you will get 'deadlocks', in Oracle 'cannot serialize access')." I would bet that, working with some DBAs, you could have gotten past the issues your client was having as you described above.
    Oh, yes, the workaround every time this occurs with another
    customer is to have them bump up the amount of that
    statically-defined memory.
    Yes, this is what I'm saying.
    This could be avoided if oracle implemented a dynamically
    self-adjusting DBMS-wide pool of short-term memory, or used
    more complex actual transaction logging.
    I think you are discounting just how complex their logging is.
    Well, it's not the logging that is too complicated, but rather too simple. The logging is just an alternative source of memory
    to use for intra-transaction bookkeeping. I'm just criticising
    the too-simpleminded fixed-per-table scratch memory for stale-
    read-data-fake-repeatable-read stuff. Clearly they could grow and
    release memory as needed for this.
    This issue is more just a weakness in oracle, rather than a
    deception, except that the error message becomes
    laughable/puzzling that the DBMS "cannot serialize a
    transaction" when there are no other transactions going on.Okay, the error message isn't all that great for this situation.
    I'm sure there are all sorts of cases where other DBMS's have
    laughable error messages. Have you submitted a TAR?
    Yes. Long ago! No one was interested in splitting the current
    message into two alternative messages:
    "This transaction has just become unserializable because
    of data changes we allowed some other transaction to do"
    or
    "We ran out of a fixed amount of scratch memory we associated
    with table XYZ during your transaction. There were no other
    related transactions (or maybe even users of the DBMS) at this
    time, so all you need to do to succeed in future is to have
    your DBA reconfigure this scratch memory to accommodate as much
    as we may need for this or any future transaction."
    I am definitely not an Oracle expert. If you can describe for
    me any application design that would benefit from Oracle's
    implementation of serializable isolation level, I'd be
    grateful. There may well be such.
    As I've said, I've been doing web apps for a while now, and
    I'm not sure these lend themselves to that isolation level.
    Most web "transactions" involve client think-time which would
    mean holding a database connection, which would be the death
    of a web app.
    Oh absolutely. No transaction, even at default isolation,
    should involve human time if you want a generically scaleable
    system. But even with a no-think-time transaction, there are definitely cases where read-data are required to stay as-is for
    the duration. Typically DBMSes ensure this during
    repeatable-read and serializable isolation levels. For those
    demanding in-the-know customers, oracle provided the select
    "FOR UPDATE" workaround.Yep. I concur here. I just think you are singing the praises of
    other DBMS's, because of the way they implement serializable,
    when their implementations are really based on something that the
    Oracle corp believes is a fundamental weakness in their
    architecture, "Writers block readers". In Oracle, this never
    happens, and is probably one of the biggest reasons it is as
    world-class as it is, but then its behaviour on serializable
    makes you resort to SELECT FOR UPDATE. For me, the trade-off is
    easily accepted.
    Well, yes and no. Other DBMSes certainly have their share of faults.
    I am not critical only of oracle. If one starts with Oracle, and
    works from the start with their performance architecture, you can
    certainly do well. I am only commenting on the common assumptions
    of migrators to oracle from many other DBMSes, who typically share
    assumptions of transactional integrity of read-data, and are surprised.
    If you know Oracle, you can (mostly) do everything, and well. It is
    not fundamentally worse, just different than most others. I have had
    major beefs about the oracle approach. For years, there was a TAR about
    oracle's serializable isolation level *silently allowing partial
    transactions to commit*. This had to do with tx's that inserted a row,
    then updated it, all in the one tx. If you were just lucky enough
    to have the insert cause a page split in the index, the DBMS would
    use the old pre-split page to find the newly-inserted row for the
    update, and needless to say, wouldn't find it, so the update merrily
    updated zero rows! The support guy I talked to once said the developers
    wouldn't fix it "because it'd be hard". The bug request was marked
    internally as "must fix next release" and oracle updated this record
    for 4 successive releases to set the "next release" field to the next
    release! They then 'fixed' it to throw the 'cannot serialize' exception.
    They have finally really fixed it.( bug #440317 ) in case you can
    access the history. Back in 2000, Tom Kyte reproduced it in 7.3.4,
    8.0.3, 8.0.6 and 8.1.5.
    Now my beef is with their implementation of XA and what data they
    lock for in-doubt transactions (those that have done the prepare, but
    have not yet gotten a commit). Oracle's over-simple logging/locking is
    currently locking pages instead of rows! This is almost like Sybase's
    fatal failure of page-level locking. There can be logically unrelated data
    on those pages, that is blocked indefinitely from other equally
    unrelated transactions until the in-doubt tx is resolved. Our TAR has
    gotten a "We would have to completely rewrite our locking/logging to
    fix this, so it's your fault" response. They insist that the customer
    should know to configure their tables so there is only one datarow per
    page.
    So for historical and current reasons, I believe Oracle is absolutely
    the dominant DBMS, and a winner in the market, but got there by being first,
    sold well, and by being good enough. I wish there were more real market
    competition, and user pressure. Then oracle and other DBMS vendors would
    be quicker to make the product better.
    Joe
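The snapshot behaviour demonstrated in the SQLPLUS exchange above can be caricatured in a few lines of Python. This is a toy model of the idea under debate, not how Oracle (or any DBMS) is actually implemented:

```python
import copy

# Committed table contents, as in the SQLPLUS example.
table = {1: "AA", 2: "BB", 3: "CC"}

# Session 1 (Oracle-style serializable): its reads see a snapshot taken
# at transaction start, modelled here as a private copy of the data.
snapshot = copy.deepcopy(table)

# Session 2 updates the row and commits while session 1 is still open;
# a writers-block-readers DBMS would have made this update wait instead.
table[1] = "YY"

print(snapshot[1])   # session 1 re-reads: still AA (the stale snapshot)
print(table[1])      # the committed truth is YY
```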

  • Decision between multiple alternatives -question

    Hello all,
    I have some doubts about the process type Decision Between Multiple Alternatives in transaction RSPC.
    The problem is this:
    1. I have one main process chain.
    2. I have the decision between multiple alternatives in this main chain, after the start process.
    I chose Formula and entered an IF using WORKINGDAY_MONTH.
    I want to test whether it is the 19th working day of the current month, using factory calendar '01'.
    If it is, I want to raise an event.
    IF( WORKINGDAY_MONTH( Current Date, '01', '' ) = 19, 'ZEVENT_WD', 'ZEVENT_ERROR' )
    This event would be useful so that I can use it to start a second chain (inserted in the main chain).
    When I check my formula, the system says it is syntactically correct, but incomplete.
    Any clues why this message appears?
    I don't know if it is OK to use the events like that in my IF (one to start the subchain and one as an error).
    Thank you a lot.
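As a rough sketch of the logic the formula expresses, here is the working-day check in Python. It counts Monday to Friday only and ignores the public holidays that factory calendar '01' would exclude; the function names are illustrative, not SAP code:

```python
from datetime import date

def workingday_month(d: date) -> int:
    """Rough stand-in for the formula variable WORKINGDAY_MONTH: the
    ordinal of d among the month's working days, counting Mon-Fri only.
    (The real factory calendar '01' also excludes public holidays.)"""
    return sum(1 for day in range(1, d.day + 1)
               if date(d.year, d.month, day).weekday() < 5)

def decide(today: date) -> str:
    # Mirrors: IF( WORKINGDAY_MONTH( Current Date, '01', '' ) = 19, ... )
    return "ZEVENT_WD" if workingday_month(today) == 19 else "ZEVENT_ERROR"

print(decide(date(2024, 1, 25)))   # 25 Jan 2024 is working day 19
```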

    Hi,
    The decision process type allows you to define a set of conditions.
    For more info, go through the links below; they explain it step by step with screenshots:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/900be605-7b59-2b10-c6a8-c4f7b2d98bae&overridelayout=true
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/301fb325-9d90-2c10-7199-89fc7b5a17b9&overridelayout=true
    Regards,
    Marasa.

  • Alternative to SE37

    Hello SAP comunity,
    I want to find out if there is any alternative to the transaction SE37 to execute BAPIs.
    Thanks.

    Locally you can use the authorizations described in SAP note 587410 to restrict the ability to single test function modules. That is effectively the same as having SAP_ALL so you must be very restrictive with it.
    Remotely you can secure BAPIs via restrictions to authorization object S_RFC and the corresponding application authorizations needed (this is one of my specialities actually - see SAP Note 1682316). As of release 7.41 on recent SP levels and kernels, you can additionally use transaction UCONCOCKPIT to deactivate the remote enabled availability of RFC function modules if they are not meant to be called from the outside.
    You have definitely opened a can of worms here for yourself! Good luck :-)
    Cheers,
    Julius

  • Issue in workflow related to PO

    Hi,
    A monthly job is scheduled to create POs in SAP from a Unix file on the application server which contains the required details. A workflow is triggered for all the error POs after the job run. The work item received displays all the error POs on a screen, and the user is given the option to reprocess them from the screen once the errors are rectified. This whole requirement is implemented using files on the application server.
    The main problem I'm facing is that the file into which this month's error POs are written is overwritten with new records in the next month's run. The user may take months to reprocess the POs. So my requirement is that when the user opens this month's work item, the screen should display this month's records, and when he opens next month's work item, he should see next month's error POs. As it stands, the old work item displays the new error POs because the file contents are overwritten in the next run. I also don't think creating a new file every month is a good option; if I did that, how would I link each work item to its file when the same program does the processing?
    Please help me out, as the issue is a bit urgent.
    Regards,
    Sam

    In order to do that, you have to explain how your solution works.
    How is the workflow started? The date should be passed in the workflow container, so you should add a new importing container element for it. Alternatively, if that is not feasible, can you assume that the workflow is started on the same date as (or with a fixed offset from) the file creation? If so, you can set the date in the workflow instead of passing it as a parameter.
    The date will of course have to be passed on to your step, and from your step to the BOR method you execute. This, however, is just a matter of binding and importing elements, and I assume you are able to solve that on your own.
    What is this report you have mentioned? SAP standard? I assumed it wasn't. If it is a customer report, you will of course need a new selection-screen parameter, and when your BOR method submits the report (or alternatively calls the transaction) the date must be passed as one of the parameters.
    Hope that helps, but if it doesn't, you have to be much more specific in your questions. You have to explain what you are trying to do and what your starting point (current solution) is.
