Extracting payloads from Dehydration Store

Is it possible to write SQL against the dehydration store to extract the payloads being received into a process?
A quick look at the database appears to show they are stored as BLOB.
Pete

You might find some useful info in this thread:
Re: Format of AUDIT_DETAILS.BIN?
Regards,
Rob

Similar Messages

  • Getting Payload from Dehydration Store

    Hi All,
    I was trying to get the payload and display it to the users when there is an error in the process.
    Could someone please let me know which tables store the payload?
    We don't have the following tables in our schema:
    dlv_message_bin and invoke_message_bin --- version 10.1.3.4
    thanks,
    Satish

    Hi
    You can use the BPEL API to retrieve the payload. The API has extensive methods with which we can retrieve almost all the information related to the instance.
    Fetching directly from the database might not work, as the XML data is stored in a RAW datatype in compressed form.
    Thanks
    vamsi

  • How to get InputVariable payload from Dehydration

    I have a BPEL process taking instanceID as input. Based on the instanceID it has to retrieve the inputVariable payload (large XML data) from the dehydration database.
    I have to provide this inputVariable payload to some other PartnerLink....
    I guess the payload may be stored in the XML_DOCUMENT and AUDIT_DETAILS tables; that's why I tried to read the BLOB, but failed....
    Is this the correct way to get the payload?
    Is there any other way to get the payload XML?
    Please help,
    it's very urgent....
    Message was edited by:
    user594804

    Yeah, xml_document is the right place most of the time, but based on the size/type of the process I sometimes find it somewhere else as well.
    I was able to read the BLOB from xml_document and marshal the object. I believe the easiest way would be to get the primary key of xml_document by joining cube_instance and other tables, and then use the getXMLDocument BPEL Client API to get the input XML payload.
    HTH,
    Chintan
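
    As a rough illustration of the join Chintan describes, a sketch like the following might work. This is an assumption-laden starting point, not a tested query: the large-document table is called document in some releases and xml_document in others, the column names (cikey, dockey, bin) are taken from the schema notes elsewhere in this thread, and the BLOB may hold a serialized DOM rather than plain XML text, which is exactly why a naive read can fail.

    ```sql
    -- Hypothetical: find the document rows for one instance and peek at the payload.
    -- Table/column names assumed from the 10.1.3 schema notes; adjust for your release.
    select d.dockey,
           utl_raw.cast_to_varchar2(dbms_lob.substr(d.bin, 2000, 1)) as payload_head
      from cube_instance ci
      join xml_document d
        on d.cikey = ci.cikey
     where ci.cikey = :instance_id;
    ```

    If the fragment comes back as binary garbage, the payload is likely stored in a serialized or compressed form, and the getXMLDocument client API is the safer route.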

  • Not able to fetch the audit trail details from dehydration store.

    Hi
    While I am trying to fetch the audit trail details from the dehydration store using the Oracle SOA 11g API, I am getting the below error.
    The complete stack trace is as below.
    javax.naming.NameNotFoundException: Unable to resolve 'FacadeFinderBean'. Resolved '' [Root exception is javax.naming.NameNotFoundException: Unable to resolve 'FacadeFinderBean'. Resolved '']; remaining name 'FacadeFinderBean'
         at weblogic.rjvm.ResponseImpl.unmarshalReturn(ResponseImpl.java:234)
         at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:348)
         at weblogic.rmi.cluster.ClusterableRemoteRef.invoke(ClusterableRemoteRef.java:259)
         at weblogic.jndi.internal.ServerNamingNode_1033_WLStub.lookup(Unknown Source)
         at weblogic.jndi.internal.WLContextImpl.lookup(WLContextImpl.java:405)
         at weblogic.jndi.internal.WLContextImpl.lookup(WLContextImpl.java:393)
         at javax.naming.InitialContext.lookup(InitialContext.java:392)
         at oracle.soa.management.internal.ejb.EJBLocatorImpl.lookupBean(EJBLocatorImpl.java:738)
         at oracle.soa.management.internal.ejb.EJBLocatorImpl.lookupFinderBean(EJBLocatorImpl.java:716)
         at oracle.soa.management.internal.ejb.EJBLocatorImpl.<init>(EJBLocatorImpl.java:167)
         at oracle.soa.management.facade.LocatorFactory.createLocator(LocatorFactory.java:35)
         at com.test.GetPayload.getCompositeInstancePayload(GetPayload.java:65)
         at com.test.GetPayload.main(GetPayload.java:129)
    Caused by: javax.naming.NameNotFoundException: Unable to resolve 'FacadeFinderBean'. Resolved ''
         at weblogic.jndi.internal.BasicNamingNode.newNameNotFoundException(BasicNamingNode.java:1139)
         at weblogic.jndi.internal.BasicNamingNode.lookupHere(BasicNamingNode.java:252)
         at weblogic.jndi.internal.ServerNamingNode.lookupHere(ServerNamingNode.java:182)
         at weblogic.jndi.internal.BasicNamingNode.lookup(BasicNamingNode.java:206)
         at weblogic.jndi.internal.RootNamingNode_WLSkel.invoke(Unknown Source)
         at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:667)
         at weblogic.rmi.cluster.ClusterableServerRef.invoke(ClusterableServerRef.java:230)
         at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:522)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:146)
         at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:518)
         at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
    My code is as below:
    Hashtable jndiProps = new Hashtable();
    jndiProps.put(Context.PROVIDER_URL, "t3://localhost:7001");
    jndiProps.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    jndiProps.put(Context.SECURITY_PRINCIPAL, "weblogic");
    jndiProps.put(Context.SECURITY_CREDENTIALS, "welcome1");
    jndiProps.put("dedicated.connection", "true");
    Locator locator = LocatorFactory.createLocator(jndiProps);
    CompositeInstanceFilter filter = new CompositeInstanceFilter();
    filter.setECID(ecid); // set the composite ECID
    filter.setId(compInstanceId); // set the composite instance id
    Any Suggestion in this regard will be helpful.
    Thanks
    Abhijit
    Edited by: 945736 on Jul 11, 2012 4:20 PM

    If it is just for a particular message, then another simple solution is to open the message and go to the details in pimon.
    http://wiki.scn.sap.com/wiki/display/PIS/AdapterMessageMonitoringVi+service+and+it%27s+methods

  • Extracting text from selected stories

    Platform : Windows
    Version: CS2
    Language: VB Script
    I run a weekly newspaper, and each week I want to repurpose approx. 60 pp
    to put on the web.
    The stories are unstructured and each page is in a single ID document.
    I thought that a headline and story text box could be manually selected and a script run to copy the contents to the clipboard, with a marker to split the headline from the following story.
    Equally, in the case of a picture, a path and caption could be captured.
    The data could then be handled by a VB programme, to create a file for each story.
    I know this seems quite laborious, but with shortcuts, the job will take about an hour.
    Is there a script somewhere that I could look at and modify to do something like this?
    Alternatively, is there a better way of repurposing unstructured documents for the web?
    TIA

    Hi John,
    Try the ExportAllStories.vbs example script--it'll export all of the stories in a document into a folder. That's probably not exactly what you want, but it'll give you a reasonable place to start.
    You can find it in the CS2 sample scripts archive, at:
    http://download.adobe.com/pub/adobe/indesign/InDesign_Example_Scripts.zip
    Thanks,
    Ole

  • Extract Payload from sxmb_moni

    Hi Experts
    The scenario is: XI pushes data into BI using the ABAP proxy adapter. Successfully passed data is seen in the PSA. The unique key in the data is the journal ID.
    Requirement: I have to read the journal ID at runtime when data passes through the adapter, and create a file with the fields Journal ID and success/failure, depending on whether the record passes through the adapter successfully or not.
    In table SXMSPMAST I get the status, but how can I read the payload? I am sure the payload for failed records is stored in some SAP internal table, but can someone tell me which table?
    This is very urgent, and suitable reward points will be awarded for any help on this.
    Thanks in advance,
    Sushmita.

    The data can be seen in tables SXMSCLUP and SXMSPCLUR.

  • Where the task payload is stored in the dehydration store

    Hi All,
    I wanted to get the task payload from the BPEL dehydration store. I have gone through the WFTASK table but I didn't see any column which contains the task payload.
    I need this payload to do the following..
    - Get the taskPayload
    - Modify it so that the modifications are reflected in the BPEL process when I submit the task back to BPEL.
    Can the above be done from the bpel dehydration store? If yes please help me in finding the exact table for this task payload.
    If not guide me how can I access and modify the task payload.
    Thank you.

    Payloads are generally stored as BLOBs. Here is a definition of all the tables. You can make the payload editable within the human task if required, so you don't need to go to the DB.
    Main tables used by the BPEL engine
    cube_instance – stores instance metadata, eg. instance creation date, current state, title, process identifier
    cube_scope – stores the scope data for an instance … all the variables declared in the bpel flow are stored here, as well as some internal objects to help route logic throughout the flow.
    work_item – stores activities created by an instance … all BPEL activities in a flow will have a work_item created for it. This work item row contains meta data for the activity … current state, label, expiration date (used by wait activities) … when the engine needs to be restarted and instances recovered, pending flows are resumed by inspecting their unfinished work items.
    document - stores large XML variables. If a variable gets to be larger than a specific size (configurable via the largeDocumentThreshold property via the domain configuration page) then the variable is stored in this table to alleviate loading/saving time from the cube_scope table.
    audit_trail - stores the audit trail for instances. The audit trail viewed from the console is modelled from an XML document. As the instance is worked on, each activity writes out events to the audit trail as XML which is compressed and stored in a raw column. Querying the audit trail via the API/console will join the raw columns together and uncompress the contents into a single XML document.
    audit_details - audit details can be logged via the api … by default activities such as assign log the variables as audit details (this behavior can be set via the auditLevel property on the domain configuration page). Details are separated from the audit trail because they tend to be very large in size … if the user wishes to view a detail they click a link from the audit trail page and load the detail separately. There is a threshold value for details too … if the size of a detail is larger than a specific value (see auditDetailThreshold) then it is place in this table, otherwise it is merged into the audit trail row.
    dlv_message – callback messages are stored here. All non-invocation messages are saved here upon receipt. The delivery layer will then attempt to correlate the message with the receiving instance. This table only stores the metadata for a message. (eg. current state, process identifier, receive date).
    dlv_message_bin – stores the payload of a callback message. The metadata of a callback message is kept in the dlv_message table, this table only stores the payload as a blob. This separation allows the metadata to change frequently without being impacted by the size of the payload (which is stored here and never modified).
    dlv_subscription – stores delivery subscriptions for an instance. Whenever an instance expects a message from a partner (eg. receive, onMessage) a subscription is written out for that specific receive activity. Once a delivery message is received the delivery layer attempts to correlate the message with the intended subscription.
    invoke_message – stores invocation messages, messages which will result in the creation of a instance. This table only stores the metadata for an invocation message (eg. current state, process identifier, receive date).
    invoke_message_bin – stores the payload of an invocation message. Serves the same purpose the dlv_message_bin table does for dlv_message.
    task – stores tasks created for an instance. The TaskManager process keeps its current state in this table. Upon invoking the TaskManager process, a task object is created, with a title, assignee, status, expiration date, etc… When updates are made to the TaskManager instance via the console, the underlying task object in the db is changed.
    schema_md – (just added via patch delivered to Veerle) contains metadata about columns defined in the orabpel schema. Use case driving this feature was how to change the size of a custom_key column for a cube_instance row? Changing the db schema was simple but the engine code assumed a certain length and truncated values to match that length to avoid a db error being thrown. Now, column lengths are defined in this table instead of being specified in the code. To change a column length, change the column definition in the table, then change the value specified in this table, then restart the server.
    Column-by-column description:
    table ci_id_range
    - next_range (integer) – instance ids in the system are allocated on a block basis … once all the ids from a block have been allocated, another block is fetched, next_range specifies the start of the next block.
    table cube_instance
    - cikey (integer) – primary key … foreign key for other tables
    - domain_ref (smallint) – domain identifier is encoded as a integer to save space, can be resolved by joining with domain.domain_ref.
    - process_id (varchar) – process id
    - revision_tag (varchar) – revision tag
    - creation_date (date)
    - creator (varchar) – user who created instance … currently not used
    - modify_date (date) – date instance was last modified
    - modifier (varchar) – user who last modified instance … currently not used
    - state (integer) – current state of instance, see com.oracle.bpel.client.IInstanceConstants for values
    - priority (integer) – current instance priority (user specified, has no impact on engine)
    - title (varchar) – current instance title (user specified, no engine impact)
    - status (varchar) – current status (user specified)
    - stage (varchar) – current stage (user specified)
    - conversation_id (varchar) – extra identifier associated with instance, eg. if passed in via WS-Addressing or user specified custom key.
    - root_id (varchar) – the conversation id of the instance at the top of the invocation tree. Suppose A -> B -> C, root( B ) = A, root( C ) = A, parent( B ) = A, parent( C ) = B. The instance at the top of the tree will not have this set.
    - parent_id (varchar) – the conversation id of the parent instance that created this instance, instance at the top of the tree will not have this set.
    - scope_revision (integer) – internal checksum of scope bytes … used to keep caches in sync
    - scope_csize (integer) – compressed size of instance scope in bytes
    - scope_usize (integer) – uncompressed size of instance scope in bytes
    - process_guid (varchar) – unique identifier for the process this instance belongs to … if changes need to be made for all instances of a process, this column is used to query (eg. stale process).
    - process_type (integer) – internal
    - metadata (varchar) – user specified
    table cube_scope
    - cikey (integer) – foreign key
    - domain_ref (integer) – domain identifier
    - modify_date (date) – date scope last modified
    - scope_bin (blob) – scope bytes
    table work_item
    - cikey (integer) – foreign key
    - node_id (varchar) – part of work item composite key, identifier for bpel activity that this work item created for
    - scope_id (varchar) – part of work item composite key, identifier for internal scope that this work item created for (note this is not the scope declared in bpel, the engine has an internal scope tree that it creates for each instance, bpel scopes will map to an internal scope but there will be other internal scopes that have no mapping to the bpel definition).
    - count_id (integer) – part of work item composite key, used to distinguish between work items created from same activity in the same scope.
    - domain_ref (integer) – domain identifier
    - creation_date (date)
    - creator (varchar) – user who created work item … currently not used
    - modify_date (date) – date work item was last modified
    - modifier (varchar) – user who last modified work item … currently not used
    - state (integer) – current state of work item, see com.oracle.bpel.client.IActivityConstants for values
    - transition (integer) – internal use, used by engine for routing logic
    - exception (integer) – no longer used
    - exp_date (date) – expiration date for this work item; wait, onAlarm activities are implemented as expiration timers.
    - exp_flag (integer) – set if a work item has been called back by the expiration agent (ie. expired).
    - priority (integer) – priority of work item, user specified, no engine impact
    - label (varchar) – current label (user specified, no engine impact)
    - custom_id (varchar) – custom identifier (user specified, no engine impact)
    - comments (varchar) – comment field (user specified, no engine impact)
    - reference_id (varchar) -
    - idempotent_flag (integer) – internal use
    - process_guid (varchar) – unique identifier for the process this work item belongs to … if changes need to be made for all instances of a process, this column is used to query (eg. stale process).
    table document
    - dockey (varchar) – primary key for document
    - cikey (integer) – foreign key
    - domain_ref (integer) – domain identifier
    - classname (varchar) – no longer used
    - bin_csize (integer) – compressed size of document in bytes
    - bin_usize (integer) – uncompressed size of document in bytes
    - bin (blob) – document bytes
    - modify_date (date) – date document was last modified
    table audit_trail
    - cikey (integer) – foreign key
    - domain_ref (integer) – domain identifier
    - count_id (integer) – many audit trail entries may be made for each instance, this column is incremented for each entry per instance.
    - block (integer) – when the instance is dehydrated, the batched audit trail entries up to that point are written out … this block ties together all rows written out at one time.
    - block_csize (integer) – compressed size of block in bytes
    - block_usize (integer) – uncompressed size of block in bytes
    - log (raw) – block bytes
    table audit_details
    - cikey (integer) – foreign key
    - domain_ref (integer) – domain identifier
    - detail_id (integer) – part of composite key, means of identifying particular detail from the audit trail
    - bin_csize (integer) – compressed size of detail in bytes
    - bin_usize (integer) – uncompressed size of detail in bytes
    - bin (blob) – detail bytes
    table dlv_message
    - conv_id (varchar) – conversation id (correlation id) for the message…this value is used to correlate the message to the subscription.
    - conv_type (integer) – internal use
    - message_guid (varchar) – unique identifier for the message…each message received by the engine is tagged with a message guid.
    - domain_ref (integer) – domain identifier
    - process_id (varchar) – identifier for process to deliver the message to
    - revision_tag (varchar) – identifier for process revision
    - operation_name (varchar) – operation name for callback port.
    - receive_date (date) – date message was received by engine
    - state (integer) – current state of message … see com.oracle.bpel.client.IDeliveryConstants for values
    - res_process_guid (varchar) – after the matching subscription is found, the process guid for the subscription is written out here.
    - res_subscriber (varchar) – identifier for matching subscription once found.
    table dlv_message_bin
    - message_guid (varchar) – unique identifier for message
    - domain_ref (integer) – domain identifier
    - bin_csize (integer) – compressed size of delivery message payload in bytes
    - bin_usize (integer) – uncompressed size of delivery message payload in bytes
    - bin (blob) – delivery message payload
    table dlv_subscription
    - conv_id (varchar) – conversation id for subscription, used to help correlate received delivery messages.
    - conv_type (integer) – internal use
    - cikey (integer) – foreign key
    - domain_ref (integer) – domain identifier
    - process_id (varchar) – process identifier for instance
    - revision_tag (varchar) – revision tag for process
    - process_guid (varchar) – guid for process this subscription belongs to
    - operation_name (varchar) – operation name for subscription (receive, onMessage operation name).
    - subscriber_id (varchar) – the work item composite key that this subscription is positioned at (ie. the key for the receive, onMessage work item).
    - service_name (varchar) – internal use
    - subscription_date (date) – date subscription was created
    - state (integer) – current state of subscription … see com.oracle.bpel.client.IDeliveryConstants for values
    - properties (varchar) – additional property settings for subscription
    table invoke_message
    - conv_id (varchar) – conversation id for message, passed into system so callbacks can correlate properly.
    - message_guid (varchar) – unique identifier for message, generated when invocation message is received by engine.
    - domain_ref (integer) – domain identifier
    - process_id (varchar) – identifier for process to deliver the message to
    - revision_tag (varchar) – revision tag for process
    - operation_name (varchar) – operation name for receive activity
    - receive_date (date) – date invocation message was received by engine
    - state (integer) – current state of invocation message, see com.oracle.bpel.client.IDeliveryConstants for values
    - priority (integer) – priority for invocation message, this value will be used by the engine dispatching layer to rank messages according to importance … lower values mean higher priority … messages with higher priority are dispatched to threads faster than messages with lower values.
    - properties (varchar) – additional property settings for message
    table invoke_message_bin
    - message_guid (varchar) – unique identifier for message
    - domain_ref (integer) – domain identifier
    - bin_csize (integer) – compressed size of invocation message payload in bytes
    - bin_usize (integer) – uncompressed size of invocation message payload in bytes
    - bin (blob) – invocation message bytes
    table task
    - domain_ref (integer) – domain identifier
    - conversation_id (varchar) – conversation id for task instance … allows task instance to callback to client
    - title (varchar) – current title for task, user specified
    - creation_date (date) – date task was created
    - creator (varchar) – user who created task
    - modify_date (date) – date task was last modified
    - modifier (varchar) – user who last modified task
    - assignee (varchar) – current assignee of task, user specified, no engine impact
    - status (varchar) – current status, user specified, no engine impact
    - expired (integer) – flag is set if task has expired
    - exp_date (date) – expiration date for task, expiration actually takes place on work item in TaskManaged instance, upon expiration task row is updated
    - priority (integer) – current task priority, user specified, no engine impact
    - template (varchar) – not used
    - custom_key (varchar) – user specified custom key
    - conclusion (varchar) – user specified conclusion, no engine impact
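
    Tying the schema notes above together, here is a hedged sketch of how one might pull an invocation payload directly from the database. Everything in it is an assumption built from the column descriptions above: the bin column is presumed to be LZ-compressed (the paired bin_csize/bin_usize columns suggest this, but it is not guaranteed across versions), and :process_id is a placeholder bind variable.

    ```sql
    -- Sketch only: join invocation metadata to its payload blob and
    -- attempt to uncompress the first couple of KB for inspection.
    select im.message_guid,
           im.receive_date,
           utl_raw.cast_to_varchar2(
             dbms_lob.substr(utl_compress.lz_uncompress(imb.bin), 2000, 1)) as payload_head
      from invoke_message im
      join invoke_message_bin imb
        on imb.message_guid = im.message_guid
     where im.process_id = :process_id
     order by im.receive_date desc;
    ```

    If lz_uncompress raises an error, the payload in that release is probably stored uncompressed or in another encoding; drop the lz_uncompress call and inspect the raw bytes first.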

  • How to extract data from an XML file and store that data into a database table

    Hi All,
    I have one table; that table contains one column, and that column contains an XML file.
    I want to extract data from that XML file and store the extracted data into another table.
    That XML file has many different values,
    and I want to store those values into the table in different columns.

    Hi,
    I am also facing the same problem. I have a .xml file and I need to import the data into a custom table in the Oracle database.
    Can you please let me know if you know the solution how to do it in oracle apps.
    Thanks,

  • Error extracting mzm.tvbnoepi.pkg while installing Xcode from App Store on OS X Lion

    Hi folks. First post here.
    I am trying to install Xcode from the App Store on Mac OS X Lion and there is this error which I am facing:
    Error extracting mzm.tvbnoepi.pkg
    This is persistent. I have reinstalled the entire OS and then tried to install Xcode again. Then I realised
    something is wrong with the Xcode binary provided by the App Store. Any solutions for this?

    I have downloaded Xcode from https://developer.apple.com/downloads/index.action and installed it. This fixed the issue.

  • Extracting json payload from the incoming request

    Hi,
    I am looking for ways to extract the application/json payload in my web resource methods when developing applications using Jersey. I know that there are multiple annotations like @PathParam, @FormParam and others to get the data sent using other HTTP methods. I'm looking for an example of how to get the payload from an HTTP POST request.
    Can someone help?
    webservices are built using jersey.
    Thanks

    http://jersey.java.net/nonav/documentation/latest/user-guide.html
    If it isn't in the user guide... well then it is either not documented or it doesn't exist.

  • Getting the payload from a faulted process

    I'm using the fault framework and I need to know how to get the payload from the faulted process. I have a custom Java logger that uses the Locator API, but I'm finding out that this will only work if the process has been persisted to the dehydration store. Is there another way to get the payload? I know I can use checkpoint/wait within the BPEL process to force dehydration, but I'm looking for another alternative.

    The problem is that the cube_instance table has not been populated yet for the instance. So from a faulted instance that has NOT been dehydrated to cube_instance, how will I get to the invoke_message table, or how will I get the payload from the faulted instance?
    I'm able to get the document fine when the cube_instance table is populated; my problem is that sometimes the instance has not been dehydrated and cube_instance is not available. So from a faulted instance, how will I get to the invoke_message table without using the cube_instance table?
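
    Since invoke_message carries its own metadata (process_id, operation_name, receive_date, state), one possible workaround is to query it directly, without going through cube_instance at all. A hedged sketch, assuming the 10.1.3 column names described earlier in this thread; the numeric state values would need checking against com.oracle.bpel.client.IDeliveryConstants:

    ```sql
    -- Sketch: locate recent invocation messages for a process without cube_instance.
    -- :process_id is a placeholder bind variable.
    select im.message_guid,
           im.receive_date,
           im.state
      from invoke_message im
     where im.process_id = :process_id
       and im.receive_date > sysdate - 4/24   -- e.g. the last four hours
     order by im.receive_date desc;
    ```

    The message_guid values returned can then be matched against invoke_message_bin to retrieve the payload BLOBs.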

  • Reading an invocation payload from BPEL

    Hi,
    I was wondering if anyone had successfully managed to read an XML payload from the BPEL dehydration store?
    I have a requirement to replay 4 hrs of messages from BPEL in the event of a catastrophic failure on a subscribing system. I can't find any documentation on any out-of-the-box functionality to do this.
    I know the payloads of the messages are stored in the dehydration store in the table XML_DOCUMENT as BLOBs, but I can't seem to read the data as a string.
    I'm running the following query:
    SELECT utl_raw.cast_to_varchar2(dbms_lob.substr(bin,5000,1)) FROM xml_document
    Does anybody have any ideas? or has anyone done anything similar?

    Hi,
    Got this from a colleague:
    create table instance_log (
      cikey integer,
      detail_id integer,
      xml xmltype
    );

    declare
      l_clob clob;
      l_dest_offset integer;
      l_src_offset integer;
      l_lang_context integer;
      l_warning integer;
    begin
      for r_ins in (select cikey,
                           detail_id,
                           utl_compress.lz_uncompress(bin) bin
                      from audit_details
                     where bin is not null)
      loop
        l_dest_offset := 1;
        l_src_offset := 1;
        l_lang_context := dbms_lob.default_lang_ctx;
        dbms_lob.createtemporary(l_clob, true);
        dbms_lob.converttoclob(
          dest_lob     => l_clob,
          src_blob     => r_ins.bin,
          amount       => dbms_lob.getlength(r_ins.bin),
          dest_offset  => l_dest_offset,
          src_offset   => l_src_offset,
          blob_csid    => 1,
          lang_context => l_lang_context,
          warning      => l_warning);
        insert into instance_log (cikey, detail_id, xml)
        values (r_ins.cikey, r_ins.detail_id, xmltype(l_clob));
      end loop;
      commit;
    end;
    /
    Apparently the expression utl_compress.lz_uncompress(bin) does the trick.
    Keep in mind though that querying on this value (searching for instances based on data in this column) will result in full table scans.
    Regards,
    Martien

  • Unrestricted sharing of free* items from iTunes Store

    We are trying to find a legal but convenient way of managing our family iTunes purchases and sharing.
    We are just beginning to use iTunes on a couple of new Macs, three iPads or iPod Touch devices and two old PC's.
    We have a home Wi-Fi network with a network drive, where we can store items to be shared.
    The children in a few years will move out and will take their music, e-books, etc. with them.
    Our Internet connection is slow, so we would like to avoid downloading the same free item from the iTunes Store to libraries associated with different Apple IDs.
    We want everybody to eventually have his/her own copy of our home videos and voice memos recorded on portable devices, which can be further shared without any restrictions.
    I understand that some of the rules that Apple set for iTunes users are as follows.
    1. Every item purchased from the iTunes Store is marked with ownership information, i.e. information about the account it was purchased from. (The account is identified at any given time by a unique Apple ID.)
    2. The ownership information on an item cannot be changed, i.e. I cannot give an item to somebody else, declining to use it any more, and transfer the item to that person's iTunes library associated with that person's iTunes account.
    Question 1: Are the statements 1 and 2 above correct and precise?
    Question 2: Does the same apply to free items (iTunes U, e-books, iPod/iPad apps) from the iTunes store? (it is painful to download the same free item to different accounts over a slow Internet connection.)
    Question 3. Does the same apply to music imported into iTunes from a CD?
    Question 4: Is it true that in order to allow the kids to take their items with them when they move out, one has to make sure that the items
    have been purchased from iTunes accounts associated with their own Apple ID's and placed in their separate iTunes libraries?
    Question 5: If I add our home videos and voice memos to my iTunes Library associated with my iTunes account, and do not retain other copies, will it be possible to copy such items to libraries associated with other Apple ID's without any restrictions? Will it be possible to extract such items from iTunes?
    Question 6: As long as we are using the same Wi-Fi, can we share music, video,  ebooks from libraries associated with different Apple ID's? I believe so, but please confirm. What are the restrictions? Is it helpful to place an iTunes library on a network drive?
    Question 7: I believe the same iTunes library can contain items purchased through different iTunes accounts; also there can be multiple iTunes libraries associated with the same iTunes account. Are such options useful and would they help in managing family purchases and sharing?
    Question 8. I noticed that a portable device can sync (download) items from only one library. Different persons have apps and music from different libraries associated with their own Apple ID's. We would like them to have also on their portable devices the same free e-books and home videos or voice memos. Do we have to copy such free e-books and voice memos to all the different libraries?
    These must be questions many families ask, but I could not find a precise and comprehensive answer despite browsing the web for several days.
    Could you please help?

    No. It said nothing about authorizing a computer. I only have one iPod and one computer with my library on it. I have listened to these 4 songs for over one year on my PC and iPod. This is a new one to me. These 4 songs still show up in the "Purchased" section of my iTunes, but I cannot play them and they will not
    copy to my iPod. I see where I can repurchase these songs at the iTunes Store, but I don't want to pay for them twice. What do you think is going on? Thanks.

  • My itunes account was disabled for some reason.  I changed password in iforgot and still didn't work.  I could log in but not make purchases from the store so I set up a new login/account .  Is there any way to move my music to the new account?

    My iTunes account was disabled for some reason. I changed my password in iForgot and it still didn't work. I could log in and see my music but not make purchases from the store or even redeem an iTunes gift card, so I set up a new login/account with another email account of mine. On the new account I can redeem my gift card and download items onto my iPad 2. Is there any way to move my music to the new account?

    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Click the Clear Display icon in the toolbar. Then try the action that you're having trouble with again. Select any messages that appear in the Console window. Copy them to the Clipboard by pressing the key combination command-C. Paste into a reply to this message (command-V).
    When posting a log extract, be selective. In most cases, a few dozen lines are more than enough.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Important: Some private information, such as your name, may appear in the log. Anonymize before posting.

  • How to extract data from Chart History?

    Dear all, I have read a lot of posts, but still don't understand how to extract data from chart history.
    Suppose you acquire 1024 points of data every time, then use "build array" to build an array, then send it to an intensity chart to show, then save the history into a .txt file; but how do you extract the data from the file?
    How does LabVIEW store the data in the 2D history buffer?
    Anybody has any examples?

    The simplest would be to save the 2D array as a spreadsheet file, then read it back the same way.
    Maybe the attached simple example can give you some ideas (LabVIEW 7.1). Just run it. At any time, press "write to file". At any later time, you can read the saved data into the second history chart.
    If you are worried about performance, it might be better to use binary files, but it will be a little more complicated. See how far you get.
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    IntensityChartHistorySave.vi ‏79 KB
