Tree must be kept alive across multiple transactions.

Hi,
After considerable time I am able to develop a tree in a docking container and populate it.
Now the requirement is that this tree must stay alive across multiple transactions.
Yes, I am calling the docking container across two transactions, so the docking container is present. How can I make it so that the contents of the tree are not changed when jumping from one transaction to the other transaction?
Is there any way to hold an instance of the object in memory, so I can hold the tree instance? Something like EXPORT/IMPORT?
All suggestions are welcome.
Regards,

Using SHMA (shared memory areas)

Similar Messages

  • Implement Caching across multiple transactions in EJB3.0

    Hi,
    I need to implement caching of entities using JPA in EJB 3.0. I am using JBoss 4.2.x for this. The requirement is that the entities remain cached across different method calls.
    Regards,
    Deepak Dabas

    Good luck with that. There are caching solutions that work with JPA; OSCache springs to mind, for example.
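
    A minimal sketch of the kind of cache being asked about, assuming a hand-rolled map in front of the EntityManager; the Customer entity, the bean name and the DAO method are made up for illustration. In practice Hibernate's second-level cache, or OSCache as suggested above, would usually be configured instead of rolling your own:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // Hypothetical entity for the example (in practice a separate file).
    @Entity
    class Customer {
        @Id
        private Long id;
        private String name;
        // getters/setters omitted for brevity
    }

    // Sketch only: a read cache that survives individual method calls.
    @Stateless
    public class CachedCustomerDao {

        @PersistenceContext
        private EntityManager em;

        // Shared across method calls (and transactions) within this JVM.
        // No eviction here; a real cache must invalidate entries on update.
        private static final Map<Long, Customer> CACHE =
                new ConcurrentHashMap<Long, Customer>();

        public Customer findCustomer(Long id) {
            Customer cached = CACHE.get(id);
            if (cached != null) {
                return cached;              // served from memory, no DB hit
            }
            Customer loaded = em.find(Customer.class, id);
            if (loaded != null) {
                CACHE.put(id, loaded);      // kept as a detached copy
            }
            return loaded;
        }
    }

    Entities returned this way are detached, so staleness and invalidation on update are the application's problem; that is exactly what a proper second-level cache handles for you.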

  • Advice needed on designing schema to accommodate multiple transaction tables.

    Hi,
    The attached image shows my current schema. It consists of three transaction tables, a product table and a calendar table.
    - Background -
    The product table 'Q1 Data Set' contains all unique sales. It also contains a number of columns by which I will later filter my pivot tables (e.g. whether the customer of the order is new/returning). This table also contains a column named 'DateOrdered', the date the order was originally placed (but not paid).
    Each sale that is paid can be settled either in a single transaction or across multiple transactions of different transaction types. An example of a sale paid in multiple parts would be an order that has three transactions: one online (table 'trans_sagepay'), one over the phone (table 'trans_epdq') and another by card (table 'trans_manual'). Furthermore, there can be more than one transaction of each type for a sale.
    I have created measures which total the sales in each transaction table. Each transaction has a 'transaction_date', which is the date of that individual transaction.
    The calendar is simply a date table with some friendly formatted columns for laying out pivot tables. An example column is FiscalMonthAbbrv, which displays months similar to '(04) - January' to accommodate our fiscal year.
    - Problem -
    My problem is that I need the ability to create some tables that have Date Ordered as the rows (listed by Year > Month), and I need to produce other tables that have Transaction Date as the rows.
    Date Ordered works fine; the problem comes when I try to create a table based on the transaction date. With the current model seen in the attached image I cannot do it, because the transactions have a relationship to Q1 Data Set and that table has the relationship with the Cal_Trans table. What happens in this scenario is that whenever I set the rows to FiscalMonthAbbr, the values it displays are the transactions grouped not by transaction date but by date ordered. To explain further: if I have an order A with a DateOrdered of 01/01/2014, but the transaction of £100 for that order was made later, on 05/01/2014, that £100 is incorrectly attributed to 01/01/2014.
    To clarify the type of table I am aiming for, see the mock-up below; I do, however, NEED the ability to filter this table using columns found in Q1 Data Set.
    How can I make a schema so that I can use both DateOrdered and TransactionDate? I cannot combine all three transaction tables into one because each transaction type has columns unique to that specific type.

    Thanks for your suggestions. At the moment I don't have time to prepare a non-confidential copy of the data model; however, I've taken one step forward and one step back!
    First, to clarify: to calculate sales of each transaction type I have created the following measures (I've given them friendly names):
    rev_cash
    rev_online
    rev_phone
    I then have a measure called rev_total which sums the above measures together. This allows me to calculate total revenue, but also to break it down by transaction type.
    With this in mind I revised the schema based on Visakh's original suggestion to look like this:
    Using this I was able to produce a table which looked like the one below:
    There were two issues with this:
    1) If I add the individual measures for each transaction type I get no errors, but as soon as I add the 'Total Sales' measure at the end of the table I get an error, "Relationship between tables may be needed". Seemingly, however, the numbers still calculate as expected. What is causing this error and how do I remove it?
    2) I CAN in this scenario filter by 'phd', which is a column in the Q1 Data Set table, and it works as expected. I cannot, however, filter by all columns in this table; an example would be 'Word Count'. 'Word Count' is an integer column, and each record in the Q1 Data Set table has a value set for it. I would like to take that column and add a new measure called 'Total Word Count' (which I have created) which calculates the total number of words in that monthly period. When I add this, however, I get the same relationship error as above, and it displays the word count total for the entire source table on every row of the pivot table.
    How can I get this schema working so that I can filter by word count and other columns from the product table? It is confusing me how I can filter by one column but not by another in the same table.
    Also, I don't fully understand how I would add a second date table or how it would help my issues.
    Thanks very much for your help.

  • Using ATMI and Tuxedo to institute distributed transactions across multiple DBs

    I am creating the framework for a given application that needs to ensure that data integrity is maintained spanning multiple databases, not necessarily within an instance of WebLogic. In other words, I need to basically have two-phase commit "internet transactions" between a given coordinator and n participants, without having any real knowledge of their internal systems.
    Originally I was thinking of using WebLogic, but it appears that I may need to have all my particular data stores registered with my WebLogic instance. This cannot be the case, as I will not have access to that information for the other participating systems.
    I next thought I would write my own TP... ouch. Every time I get through another iteration I keep hitting the same issue of falling into an infinite loop trying to ensure that my coordinator and the set of participants were each able to perform the directed action.
    My next attempt has led me to the world of ATMI. Would ATMI be able to help me here? Granted, I am using Java, so I am assuming that I would have to use CORBA to make the calls, but will ATMI enable me to truly manage and create distributed transactions across multiple databases? Please, any advice at all would be greatly appreciated.
    Thanks
    Chris

    Andy,
    I will not have multiple instances of WebLogic, as I cannot enforce that the other participants involved in the transaction have WebLogic as their application server. That being said, I may not have the choice but to use WTC.
    Does this make more sense?
    Andy Piper <[email protected]> wrote in message news:<[email protected]>...
    "Chris" <[email protected]> writes:
    > I am creating the framework for a given application that needs to ensure that data
    > integrity is maintained spanning multiple databases, not necessarily within an
    > instance of WebLogic. In other words, I need to basically have two-phase commit
    > "internet transactions" between a given coordinator and n participants without
    > having any real knowledge of their internal systems.
    > Originally I was thinking of using WebLogic but it appears that I may need to
    > have all my particular data stores registered with my WebLogic instance. This
    > cannot be the case as I will not have access to that information for the other
    > participating systems.
    I don't really understand this. From 6.0 onwards you can do 2PC
    between WebLogic instances, so as long as the things you are calling
    are transactional (EJBs for instance) it should all work out fine.
    > I next thought I would write my own TP... ouch. Every time I get through another
    > iteration I kept hitting the same issue of falling into an infinite loop trying
    > to ensure that my coordinator and the set of participants were each able to perform
    > the directed action.
    > My next attempt has led me to the world of ATMI. Would ATMI be able to help me
    > here? Granted, I am using Java, so I am assuming that I would have to use CORBA
    > to make the calls, but will ATMI enable me to truly manage and create distributed
    > transactions across multiple databases? Please, any advice at all would be greatly
    > appreciated.
    I don't see that ATMI would give you anything different. Transaction
    management in Tux is fairly similar to WebLogic (it was written by the
    same people). If you are trying to do interposed transactions
    (i.e. multiple co-ordinators) then WTC would give you this, but it is
    only a beta feature in WLS 6.1. Using Tux domain gateways would also
    give you interposed behaviour, but it would require you to write your servers
    in C or C++.
    andy
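
    To make the "as long as the things you are calling are transactional" point concrete, here is a rough sketch of demarcating one JTA transaction around calls that touch two different databases. The DebitService/CreditService interfaces and their JNDI names are invented for illustration, and the container still needs XA-capable data sources configured for both databases:

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    // Hypothetical business interfaces, shown so the sketch is self-contained.
    interface DebitService  { void debit(long account, double amount); }
    interface CreditService { void credit(long account, double amount); }

    public class TransferClient {

        public void transfer(long fromAcct, long toAcct, double amount) throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx =
                    (UserTransaction) ctx.lookup("java:comp/UserTransaction");

            utx.begin();
            try {
                // Sketch only: real remote EJB lookups would also narrow the stubs.
                DebitService debit = (DebitService) ctx.lookup("ejb/DebitService");
                CreditService credit = (CreditService) ctx.lookup("ejb/CreditService");

                debit.debit(fromAcct, amount);     // touches database A
                credit.credit(toAcct, amount);     // touches database B

                utx.commit();                      // coordinator drives 2PC over both resources
            } catch (Exception e) {
                utx.rollback();                    // either both commit or neither
                throw e;
            }
        }
    }

    The transaction manager inside the application server does the coordination; the client code only marks where the unit of work begins and ends.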


  • Maintaining Transaction Across Multiple JSP Pages

    Hi,
    I have a multi-page registration (3 steps). On each step the submitted data is written to the database via an EJB component (session bean). How do I maintain a transaction across these JSP pages so that the data in the database is consistent? If there is a problem in the 3rd step, the data submitted in the first two steps should be rolled back.
    How do I maintain a transaction across multiple pages?
    Regards
    -MohanRaj
              

    It will take from several minutes to a long time for a user to complete a multi-page registration process. Do you really have enough database connections that each concurrent user can hold on to one?
    Usually you cannot open more than 50-200 connections to a database at any given time.
    Remember that some users will abandon the registration process. Can you afford to have their sessions hold a DB connection until the session times out?
    Consider changing your data model so that you can run and commit a transaction at the end of processing the form data from each page. Immediately after the commit, give the DB connection back to the pool inside the app server.
    It can be as simple as having a column in the database, of an enum type, whose values record how far the registration process has progressed.
    BTW, if you absolutely have to hold on to the DB connection, you can stuff it into a session-scoped attribute and it will be available on all pages.
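
    A rough sketch of the commit-per-step idea, using plain JDBC; the registration table, its education_profile and step_completed columns, and the DAO name are all invented for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.sql.DataSource;

    // Sketch only: persists one step, records progress, and releases the connection.
    public class RegistrationStepDao {

        private final DataSource dataSource;   // container-managed connection pool

        public RegistrationStepDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Called at the end of step 2: save that step's data, mark how far the
        // user has got, commit, and return the connection to the pool at once.
        public void saveStepTwo(long registrationId, String educationProfile) throws Exception {
            Connection con = dataSource.getConnection();
            try {
                con.setAutoCommit(false);
                PreparedStatement ps = con.prepareStatement(
                        "UPDATE registration SET education_profile = ?, step_completed = 2 WHERE id = ?");
                try {
                    ps.setString(1, educationProfile);
                    ps.setLong(2, registrationId);
                    ps.executeUpdate();
                } finally {
                    ps.close();
                }
                con.commit();        // step 2 is durable on its own
            } catch (Exception e) {
                con.rollback();      // only this step is rolled back
                throw e;
            } finally {
                con.close();         // connection goes straight back to the pool
            }
        }
    }

    Abandoned registrations then show up as rows whose step_completed never reaches 3 and can be swept up by a scheduled job, instead of tying up a pooled connection until the session times out.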

  • APO gATP vs R/3 ATP - To check sales order ATP across multiple plants

    Hi There,
    I am trying to evaluate gATP functionality for SD sales orders.
    The primary requirement is to have sales order ATP checking take place across multiple plants.
    E.G.
    Sales order line is entered for qty 100
    60 is available in plant A, 40 is available in plant B
    System checks both plants and creates 2 lines - one for delivery from plant A and one for delivery from plant B
    (we are currently heading down the road of writing ABAP to do this 'multi-plant' check in R/3 but the more complex the requirements get the more interested I am in understanding more about APO/gATP)
    I would like to understand the benefit of implementing APO / gATP as opposed to using standard R/3 ATP and perhaps writing custom ABAP code to search for inventory across multiple plants.
    I would appreciate any insight regarding what is required to set up gATP to perform such checking, and any other feedback regarding this issue - especially if you have had to implement something similar at your company.
    I have looked here but not much clear help:
    http://help.sap.com/saphelp_scm50/helpdata/en/26/c2d63b18bc7e7fe10000000a114084/frameset.htm
    Thanks,
    Niall

    Hi Niall
    you are probably looking at RBATP (Rule based ATP). Look at transaction /sapapo/rba04 in APO where you develop your own location and product substitution rules. Going down an ABAP road in R/3 may work short-term but not long-term as the requirements may get more complex.
    Regards
    Srinivas

  • Manually controlling music across multiple computers

    I've seen a lot of similar posts about this, and I find it quite frustrating that I was able to manually control my music with my 30GB iPod across multiple computers, but my iPhone locks me down to just one computer.
    Even if I decide to use the 'manual' option, it just doesn't let me add music to my iPhone from multiple computers. I am constantly directed to erase and re-sync with the new computer. That's not a solution I like, considering what I can do with my older iPod.
    The only solution I have found so far is to jailbreak the iPhone, but I refuse to go down that path. I read a few reasons Apple released as to why it was limited and find them very silly, for lack of a better word.
    Has anyone experienced the same frustration and found a solution? Or does anyone know if there is an update to resolve this any time soon?
    Thanks!

    One thing that might help.
    The iPhone will only sync music and video content from one computer at a time.
    If you want to sync music or video content from a second computer, iTunes will remove what the first computer put on the iPhone.
    One thing you can do, though, is combine your iTunes libraries onto one computer.
    This article might help out.
    http://support.apple.com/kb/HT1329?viewlocale=en_US

  • Filestream Partitioning across multiple drives

    I have a SQL 2008 R2 ENT database with the single [PRIMARY] filegroup, and a single FilestreamGroup.  The filestream has millions of records, cannot be restored, and is about to exceed the drive space limit.
    The table with the single filestream column has a primary key column that is also the Cluster index key.  There is a Full Text index and several foreign key constraints to this table's primary key. All must be disabled prior to dropping and rebuilding
    the index for partitioning (tried and tested).
    The filestream must be spread across multiple drive letters, and have multiple partitions on each drive, to facilitate file-restores within SLA.  Due to its size, it may exceed the weekend maintenance window, and therefore must be done ONLINE to allow
    the business to save new documents while the rebuild is in operation.
    How should I arrange this into filegroups and files? A data filegroup per drive? And what is best practice for the filestream?

    I have never worked with it, but it seems very logical. If you create a partition that says that some data should be in another partition, the data has to be moved to that partition. And, yes, it has to remain in the old partition as well, in case you do a restore to a point in time. This is no different than if you just delete a row.
    To get rid of the rows in the old partition, you need to back up the transaction log, checkpoint, and back up the log again, if memory serves.
    As for the font issue, the editor in the web UI stinks. That's one reason I stick to the NNTP bridge.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Spreading a single form across multiple pages

    I'd like to implement a single-record form that spreads its items across multiple pages for the purpose of grouping related information on each page, such as "Contact Details", "Education Profile" etc. There are way too many columns for a single page.
    In Oracle forms you can create a form across multiple canvases or on different tabs of a tabbed canvas. Each item has a property to position it on which canvas.
    Is this possible in ApEx? I can see that transaction processing could be complex.
    How do I avoid, for example, inserting the record a second time if I commit from a different page?
    Any comments appreciated.
    Paul P

    Another way to do this without javascript and ajax that works pretty well is to set up a list that represents the logical "sections" and display it as a sidebar list.
    Create a hidden item on the page called PX_ACTIVE_SECTION and set the visibility of each region on the page based on the value of this item. For example: :PX_ACTIVE_SECTION = 'Contact Details'. You can also have multiple regions associated with a single tab. Set the default value to whatever section you wish to display first.
    Next, set each list item to be "current" when :PX_ACTIVE_SECTION is equal to that item ('Contact Details', 'Education Profile', etc.). Also set the URL destination of each item to: javascript:doSubmit('section name');
    Finally, add a branch back to the current page that sets PX_ACTIVE_SECTION to &REQUEST.. This traps the doSubmit call so you can set the hidden item. (Add a condition, if needed, to prevent this branch from executing due to other buttons and requests).
    The result is that the user can switch freely between sections and save after all data is entered. The page does refresh (since it doesn't use AJAX), but if your regions aren't too big, it should be reasonable.

  • Keyword search across multiple folders

    Is it possible to do keyword searches across multiple folders and drives in Bridge CS3? On disk I have organised my images by geographical location, but want to keyword them and search for them by theme. This will cut across several drives each with numerous folders and sub-folders. All the drives, folders and sub-folders are on the same pc, which is a stand alone machine and not networked.

    With Bridge you can search multiple folders on the same drive, but I don't think you can search across multiple drives (I read that you can with Lightroom).
    The search goes down the directory tree starting where you tell it. So therefore, start high enough to include all folders you want to look in.
    If you have Vista you could search across multiple drives if they are indexed. Advanced search allows multiple keywords (tags), and it is very fast, much faster than Bridge.

  • EtherChannel Across Multiple Slots

    The examples I have seen for EtherChannel always bundle multiple ports on the same slot. It seems that bundling ports across multiple slots would increase the resiliency, for example when linking two core switches together. I assume this would allow the link to stay up if the card in a given slot failed. Is this supported? If so, are there any concerns?

    Across slots is indeed done to increase resiliency.
    Concerns? Yes, on some chassis, different cards may have different QoS architectures; you may have to configure the platform to ignore this. Also, you can channel across different media (fiber and copper).
    At least on 6500 VSS, ingress traffic will use same 6500 egress link (this to avoid transiting VSL).

  • Singletons across multiple JVMs

    I have an EJB application container which is spread across multiple JVMs (something like a clustered EJB container). I am using a Singleton class to hand out 'running serial numbers', so there is a Singleton class per JVM (which is not very clean, but I chose this path in the absence of anything better). Each Singleton comes up when it is first invoked within a JVM and reads a starting running number and a pool of numbers from the database. This way multiple Singletons will not step on each other when doling out the serial numbers, and there is no need to go back to the database for every request for the 'running serial number'.
    Is there some way to do some of the initialization within these Singletons just ONCE across the multiple instances of the JVM, i.e. a real Singleton across multiple JVMs?
    Thanks
    Ramdas

    Pranav,
    thanks for your suggestions.
    The idea of using a session bean to access a serialized object was one of my design options when I first implemented this, but I decided against it because of the overhead of having to make an RMI call for every request for the 'running serial number'. The application being written is for a performance benchmark, and the request for the number is made at least once per transaction, so I chose to use Singletons.
    It is a clustered EJB container (not like most containers), in the sense that a single container is spread across multiple JVMs.
    > Now if your application is getting clustered then you have a problem. What you can do is
    > put the object you have made into the JNDI context, and every time you want to edit it,
    > pull it out, edit it and set it back. This will act as your singleton object in a
    > clustered environment across JVMs.
    I did not fully understand the second part of your solution.
    Ramdas
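
    For reference, a stripped-down sketch of the per-JVM block-allocating singleton described in the question; SerialBlockDao is a made-up interface standing in for the real database access (e.g. an update of a counter row) that atomically reserves a range of numbers:

    // Sketch only: SerialBlockDao is hypothetical and hides the database work.
    interface SerialBlockDao {
        // Atomically reserves 'size' consecutive numbers and returns the first one.
        long reserveBlock(int size);
    }

    public final class SerialNumberGenerator {

        private static final int BLOCK_SIZE = 1000;
        private static SerialNumberGenerator instance;

        private final SerialBlockDao dao;
        private long next;    // next number to hand out
        private long limit;   // first number beyond the current block

        private SerialNumberGenerator(SerialBlockDao dao) {
            this.dao = dao;
        }

        public static synchronized SerialNumberGenerator getInstance(SerialBlockDao dao) {
            if (instance == null) {
                instance = new SerialNumberGenerator(dao);
            }
            return instance;
        }

        // One database round trip per BLOCK_SIZE numbers; each JVM's singleton
        // gets disjoint blocks, so they never hand out the same serial twice.
        public synchronized long nextSerial() {
            if (next >= limit) {
                next = dao.reserveBlock(BLOCK_SIZE);
                limit = next + BLOCK_SIZE;
            }
            return next++;
        }
    }

    This still is not a single instance across JVMs; the shared state that keeps the numbers unique lives in the database, which is the same idea as the JNDI suggestion above (keep the authoritative object outside any one JVM).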

  • Splitting across multiple files

    I am creating large, image-heavy documents, in the zone of 40+ pages with at least 10-11 embedded PDF files (equations from LaTexiT and images from Grapher, mostly) per page. The individual files are not large, but having so many seems to make opening the files a very slow process.
    I could save each of my sections as a separate file, as I usually only need one section at a time, but I'd be sad to lose things like self-updating page counts and page numbers. But is it possible to split large Pages documents across multiple files, at least for that sort of purpose? I've seen PDF files that do this (textbooks and other enormous documents).

    No, I'm afraid that is not possible. You could save them as individual files and then update the first page number in each document manually though.

  • CFTRANSACTION across multiple methods??

    I have a couple of questions around CFTRANSACTION
    1) Can I use it around several component calls? eg
    <cftransaction>
    <cfinvoke component="myComponent"
    method="InsertTable1">
    <cfinvokeargument ........ />
    </cfinvoke>
    <cfinvoke component="myComponent"
    method="InsertTable2">
    <cfinvokeargument ........ />
    </cfinvoke>
    </cftransaction>
    2) If this is an inappropriate use of CFTRANSACTION, is there a way to programmatically achieve transactioning when database inserts/updates/deletes are performed across multiple component methods?

    In article <f4t1r9$akd$[email protected]>
    "Swampie" <[email protected]> wrote:
    > I have a couple of questions around CFTRANSACTION
    >
    > 1) Can I use it around several component calls? eg
    Yes.
    > 2) If this is an inappropriate use of CFTRANSACTION
    No, this is appropriate.
    Just remember that you can't have a transaction spanning operations on multiple data sources and, prior to CF8, you cannot have nested transactions (hence Reactor and Transfer both have a boolean flag on save() methods to indicate you are wrapping the calls in your own cftransaction tag).
    Sean
    I'm trying a new usenet client for Mac, Nemo OS X.
    You can download it at
    http://www.malcom-mac.com/nemo
