Regarding multiple sources to a single target

Hi all,
For which business need would we go for a multiple-messages-to-single-message (n:1) mapping?
So far I have encountered 1:n but not the other way around.
Please explain the business need and point me to some blogs to practice with.
Thanks

Hi,
Imagine you receive an order, call an RFC to look up a partner number, and send the order to a receiver system with the partner number obtained from the RFC.
You would create an integration process that first receives the order. It then calls the RFC, and the output from the RFC is stored in the container.
To create the output order, data from the original order is needed as well as the output from the RFC. To achieve this, you call a mapping that takes two input documents (the original order and the response from the RFC) and maps them to the output order.
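As a language-neutral sketch of that 2:1 step (this is plain Java DOM code, not the PI mapping API, and every element name below is invented for illustration), merging the two input documents into one output order could look like this:

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class TwoToOneMerge {
    // Merge an order document and an RFC response into one output order.
    public static Document merge(Document order, Document rfcResponse) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document out = builder.newDocument();
        Element root = out.createElement("OutputOrder");
        out.appendChild(root);
        // Copy the item list from the original order (first input document).
        Element items = (Element) out.importNode(
                order.getDocumentElement().getElementsByTagName("Items").item(0), true);
        root.appendChild(items);
        // Add the partner number obtained from the RFC (second input document).
        String partner = rfcResponse.getDocumentElement()
                .getElementsByTagName("PartnerNumber").item(0).getTextContent();
        Element partnerEl = out.createElement("PartnerNumber");
        partnerEl.setTextContent(partner);
        root.appendChild(partnerEl);
        return out;
    }
}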
This blog gives an example of another multi-mapping scenario: IDOCs (Multiple Types) Collection in BPM (/people/pooja.pandey/blog/2005/07/27/idocs-multiple-types-collection-in-bpm)
Kind regards,
Koen

Similar Messages

  • Transform data from multiple sources to a single target in BPEL 10.1.3.1.0

    Hi,
    I need to transform data from multiple sources to a single target schema.
    Can anyone give me some ideas?
    Regards
    janardhan

    We went another way to merge multiple sources into one target.
    First we created a wrapper variable with the structure of the target variable that we want to use in the transformation.
    With an assign element we filled the wrapper variable with information from the input variable of the BPEL process plus some additional information.
    Then we called the transformation with the wrapper variable as source and the target variable as target.
    We have used this approach in several BPEL processes and it works for us.
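    In plain Java terms (the types here are hypothetical; in BPEL this is done with assign and transform activities), the wrapper pattern boils down to building one target-shaped object from the process input plus the additional information before the single-source transformation runs:

    public class WrapperPattern {
        // Hypothetical types standing in for the BPEL variables.
        static class OrderInput { String customerId; }
        static class TargetRecord { String customerId; String region; }

        // Fill a target-shaped wrapper from the input plus additional data,
        // so that a single-source transformation is enough afterwards.
        static TargetRecord buildWrapper(OrderInput input, String additionalRegion) {
            TargetRecord wrapper = new TargetRecord();
            wrapper.customerId = input.customerId; // from the process input variable
            wrapper.region = additionalRegion;     // the "some additional information"
            return wrapper;
        }

        public static void main(String[] args) {
            OrderInput in = new OrderInput();
            in.customerId = "C-1001";
            TargetRecord merged = buildWrapper(in, "EMEA");
            System.out.println(merged.customerId + " / " + merged.region);
        }
    }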
    Within Oracle SOA Suite 11g there is also the mediator component, which provides simple routing
    and transformation functionality within a composite (SCA).
    That's all, I hope it helps.
    Michael

  • Open Hub - Data from multiple sources to single target

    Hello,
    I am using Open Hub to send data from SAP BI to flat files. I have a requirement where I want to create a single destination from multiple sources. In other words, in BI we have different tables for attributes and texts, and I would like to combine the data from attributes and texts into a single file. For example, I want to have material attributes and texts in the same output file.
    Is this possible in Open Hub? If yes could you please help me to understand the process.
    Thanks,
    KK

    Hi,
    1. Create the InfoSpoke and activate it.
    2. Change it and go to the transformation.
    3. Check the box "InfoSpoke with Transf. Using BAdI".
    4. You are asked whether you want to generate the spoke. Say yes, simply set some texts and activate, then return.
    5. You can now change the target structure. Simply create a Z structure in SE11 with all the attribute and text fields in it and enter it here.
    6. Double-click on the BAdI implementation, then double-click again on the "TRANSFORM" method of the implementation. It will take you to the method "IF_EX_OPENHUB_TRANSFORM~TRANSFORM".
    7. Write code to select and fill the text field and map the other fields to the attribute fields (a generic sketch of this join logic follows the example below).
    Example:
    ZEMPLOYEE_ATTR_STRU - target structure for InfoSpoke EMPLOYEE_ATTR
    Field         Data element         Type  Len.  Dec.  Description
    EMPLOYEE      /BI0/OIEMPLOYEE      NUMC  8     0     Employee
    DATETO        /BI0/OIDATETO        DATS  8     0     Valid to
    DATEFROM      /BI0/OIDATEFROM      DATS  8     0     Valid from
    COMP_CODE     /BI0/OICOMP_CODE     CHAR  4     0     Company code
    CO_MST_AR     /BI0/OICO_MST_AR     CHAR  4     0     Controlling Area of Master Cost Center
    HRPOSITION    /BI0/OIHRPOSITION    NUMC  8     0     Position
    MAST_CCTR     /BI0/OIMAST_CCTR     CHAR  10    0     Master Cost Center
    TXTMD         RSTXTMD              CHAR  40    0     Medium description
    Note: the text and attribute fields are in the same structure.
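    The actual implementation of step 7 is ABAP inside IF_EX_OPENHUB_TRANSFORM~TRANSFORM; purely as a language-neutral sketch of that join logic (written here in Java with invented record types, not the BAdI API), it is a key lookup that enriches each attribute row with its text:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class AttributeTextJoin {
        // Invented record types standing in for the attribute and text tables.
        record AttrRow(String employee, String compCode) {}
        record TextRow(String employee, String txtmd) {}
        record OutRow(String employee, String compCode, String txtmd) {}

        // Select the texts once, then fill TXTMD for every attribute row.
        static List<OutRow> transform(List<AttrRow> attrs, List<TextRow> texts) {
            Map<String, String> textByKey = new HashMap<>();
            for (TextRow t : texts) {
                textByKey.put(t.employee(), t.txtmd());
            }
            List<OutRow> result = new ArrayList<>();
            for (AttrRow a : attrs) {
                result.add(new OutRow(a.employee(), a.compCode(),
                        textByKey.getOrDefault(a.employee(), "")));
            }
            return result;
        }
    }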

  • Multiple sources and a single target

    Hi,
    I have 3 users in Informatica and I want to find out the tablespace size in GB and MB:
    SELECT
        '$$TABLESPACE_NAME',
        SYSDATE,
        SUM(BYTES) / 1024 / 1024 / 1024 size_in_gb,
        SUM(BYTES) / 1024 / 1024 size_in_mb
    FROM user_segments
    WHERE tablespace_name LIKE '%$$TABLESPACE_NAME%'
    I configured this workflow with 3 session tasks, each session task configured with one user, but in the workflow connection details I am only able to add one pair of parameters, i.e. dbconnection_oltp and dbconnection_olap.
    So what is happening is that it reads the size for one user only (the one behind dbconnection_oltp and dbconnection_olap). What about the remaining users? The remaining session tasks fail,
    e.g. dbconnection_olap2.
    thanks
    Chinu

    Hi
    I have 3 different session tasks in one workflow, each with a different source. Now I want the parameters to be generated automatically in the parameters tab of DAC,
    because when I add the above query in Informatica it reads only the one connection configured in the task, i.e. it does not read the remaining 2 sources:
    dbconnection_oltp and dbconnection_olap -- session 1 -- is read, because it is configured in the task configuration of DAC
    dbconnection_oltp1 and dbconnection_olap1 -- session 2 (different source and target; this connection is never hit)
    etc.
    thanks
    chinu
    Edited by: Chinu on Aug 4, 2011 9:05 PM
    Edited by: Chinu on Aug 4, 2011 9:08 PM

  • Multi-source to single target XSLT

    I heard we can do a multi-source to single-target XSLT mapping in 11g, but how?

    Here you go:
    http://blogs.oracle.com/soa_how_to/2010/04/how_to_implement_multi-source_xslt_mapping_in_11g_bpel.html

  • Mapping 2 source structures to a single target database

    Hi Experts, my scenario is Proxy to JDBC, in which I need to send data from 2 source structures to 1 target table.
    Source structures:
    ABC
      row1
        Item1
          (fields)
    DEF
      row2
        item2
          (fields)
    Target:
    XYZ
      InsertStatement
        DBTable
          Action
          Table
          Access
            (fields)
    Please let me know how to map the 2 source nodes to the target access node to transfer data from the source tables to target database.
    Regards,
    Krishna

    You should make use of BPM to collect the two source structures.
    One of the BPM examples mentioned in IR --> SAP BASIS --> SystemPatterns can be used as a reference for this purpose.
    Once the messages are collected, perform a 2:1 mapping (the two proxy messages as source and one JDBC message as target).
    If you want the mapping logic, then please provide a proper format of both your source and target messages and the expected mapping, so that someone from SDN can help you out.
    The target structure provided is a bit confusing.
    Are you going to receive two different proxy messages (i.e. two different calls), or just one proxy call with two different nodes within the same message? ... I'm confused.
    Regards,
    Abhishek.
    Edited by: abhishek salvi on Sep 25, 2009 11:51 AM

  • Load into a single target table from multiple source tables in a single interface

    Hi
    I have four source tables and a single target table.
    I need to move data from one of these tables into the target table, and the source table has to be decided based on user input.
    Example:
    Let's say there are four tables A, B, C, D and one target table T.
    If the user input says A,
    then the data from table A moves to table T.
    And again, if the user says table C, then the data from table C moves to table T.
    And we have to create only one interface in Oracle Data Integrator (ODI) to achieve this.
    You can make assumptions about the source and target tables.

    Hi,
    In ODI 11g there is a new feature called datasets. It allows you to use UNION, MINUS, etc.
    Google it and you will find many tutorials on datasets; check this link:
    http://www.rittmanmead.com/2011/06/odi-11g-new-mapping-and-interface-features-part-1/
    In your case you can provide filter conditions on your tables, i.e.
    say my target table is EMPLOYEE and my source tables are EMPLOYEE and DEPARTMENT:
    INSERT INTO EMPLOYEE (CUSTOMER_ID, CUSTOMER_NAME)
    SELECT CUSTOMER_ID, CUSTOMER_NAME FROM employee WHERE 'EMPLOYEE' = :EMP
    UNION
    SELECT DEPARTMENT_ID, DEPARTMENT_NAME FROM departments WHERE 'DEPARTMENT' = :EMP;
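    The trick in that query is that each UNION branch is guarded by a constant-to-parameter comparison, so only the branch matching the user's input returns rows. A hedged JDBC sketch of the same idea (the connection details, table and column names are all invented):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class SourceSwitch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger")) {
                String sql =
                    "INSERT INTO T (ID, NAME) "
                  + "SELECT ID, NAME FROM A WHERE 'A' = ? "   // returns rows only if the user picked A
                  + "UNION ALL "
                  + "SELECT ID, NAME FROM C WHERE 'C' = ?";   // returns rows only if the user picked C
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    String choice = "A"; // the user input that decides the source table
                    ps.setString(1, choice);
                    ps.setString(2, choice);
                    ps.executeUpdate();
                }
            }
        }
    }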
    I just pasted the screenshots on the following page: http://oracoholic.blogspot.in/ . Have a look.
    Edited by: user8427112 on Jan 8, 2013 11:04 AM

  • Restore single datafile from source database to target database.

    Here's my issue:
    Database Release : 11.2.0.3 across both the source and targets. (ARCHIVELOG mode)
    O/S: RHEL 5 (Tikanga)
    Database Storage: Using ASM on a stand-alone server (NOT RAC)
    Using Oracle GG to replicate changes on the Source to the Targets.
    My scenario:
    We utilize sequences to keep the primary keys intact and these are replicated using GG. All of my schema tables are located in one tablespace and datafile, and all of my indexes are in a separate tablespace (nothing is partitioned).
    In the event of media failure on the Target or my target schema being completely out of whack, is there a method where I can copy the datafile/tablespace from my source (which is intact) to my target?
    I know there are possibilities:
    1) Restore/recover the tablespace to an SCN or timestamp in the past and then use GoldenGate to run the transactions in (but this could take time depending on how far back I need to recover the tablespace and how many transactions have been processed with GG; this is not fool-proof).
    2) Use Data Pump to move the data from the source schema to the target schema (but the sequences are usually out of order if they haven't fired on the source; you get that 'sequence is defined for this session' message). I've tried this scenario.
    3) Alter the sequences to get them to the proper number using the start-with and increment-by feature (again, this could take time depending on how many sequences are out of order).
    I would think you could
    1) back up the datafile/tablespace on the source,
    2)then copy the datafile to the target.
    3) startup mount;
    4) Newname the new file copied from the source (this is ASM)
    5) Restore the datafile/tablespace
    6) Recover the datafile/tablespace
    7) alter database open;
    Question 1: Do I need to also copy the backup piece from the source when I execute the backup tablespace on the source as indicated in my step 1?
    Question 2: Do I need to include "plus archivelog" when I execute the backup tablespace on the source as indicated in my step 1?
    Question 3: Do I need to execute an 'alter system switch logfile' on the Target when the recover in step 6 is completed?
    My scenario sounds like a cold backup but running in ARCHIVELOG mode, so the source database could remain online throughout.
    Just looking for alternate methods of recovery.
    Thanks,
    Jason

    Let me take another stab at sticking a fork into this myth about separating tables and indexes.
    Let's assume you have a production Oracle database environment with multiple users making multiple requests at the exact same time. This assumption mirrors reality everywhere except in a classroom where a student is running a simple demo.
    Let's further assume that the system looks anything like a real Oracle database system where the operating system has caching, the SAN has caching, and the blocks you are trying to read are split between memory and disk.
    Now you want to do some simple piece of work and assume there is an index on the ename column...
    SELECT * FROM emp WHERE ename = 'KING';
    The myth is that Oracle is going to, in parallel, read the index and read the table segments better, faster, whatever, if they are in separate physical files mapped by separate logical tablespaces somehow to separate physical spindles.
    Apply some synapses to this myth and it falls apart.
    You issue your SQL statement and Oracle does what? It looks for those index blocks where? In memory. If it finds them it never goes to disk. If it does not it goes to disk.
    While all this is happening the hundreds or thousands of other users on the database are also making requests. Oracle is not going to stop doing work while it tries to find your index blocks.
    Now it finds the index block and decides to use the ROWID value to read the block containing the row with KING's data. Did it freeze the system? Did it lock out everyone else while it did this? Of course not. It puts your read request into the queue and, again, first checks memory to see if it needs to go to disk.
    Where in here is there anything that indicates an advantage to having separate physical files?
    And even if there was some theoretical reason why separate files might be better ... are they separate in the SAN's cache? No. Are they definitely located on separate stripes or separate physical disks? Of course not.
    Oracle uses logical mappings (tables and tablespaces) and SANs use logical mappings, so you, the DBA or developer, have no clue as to where anything is physically located.
    PS: Ouija Boards don't work either.

  • How to implement multi-source XSLT mapping in 11g PS3 BPEL  ?

    Hi
    How do I implement a multi-source (single destination) XSLT mapping in 11g PS3 BPEL? Is there a good step-by-step example?
    thx
    d

    Hi d,
    There is also a sample available at samplecode.oracle.com: mapper-105-multiple-sources.zip.
    Regards,
    Neeraj Sehgal

  • Designing a multi-source universe is taking a long time

    Hi
    I'm facing a problem with IDT when designing a multi-source universe. It is very slow when I try to click on a dimension to change its description in the Business Layer. The universe back end is Oracle and a SQL Server database.
    What should I do to increase IDT performance at design time?
    IDT version: 4.1 Support Pack 2

    Hi Sreeni,
    Do you have a single Adaptive Processing Server containing all services?
    How is the APS split and sized?
    Regards,
    Manpreet

  • The quest for a good multi-source video workflow

    I frequently have to mix together several sources, and the multi-camera source sequence in Premiere Pro should in theory be the best way to go.
    However, it's seriously underdeveloped, and for a single camera with separate audio I end up using a regular sequence instead:
    mix and scrub audio in Audition, tag and comment source video in Prelude, import both into Premiere Pro and put them together in a sequence, so I can use that sequence as a source for other sequences.
    Why don't I use a multi-camera sequence? Because you have to tag synchronization points before you put everything together, because audio and video are often recorded out of sync, with several video recordings per audio recording, etcetera. It ends up a mess. We don't live in an ideal world, and since we don't want to waste time (and make the talent lose focus) organizing things, recording happens pretty much on the fly, with all that implies in differing methods, ideals and so on. We need to know how to cut resources (and this includes not using the most expensive editing tools), but there's always room for improvement.
    I like to slap everything loosely together in post-production and fine-sync on audio waveforms on the run, but this means you must be able to adjust the multi-camera source sequence after you have put it together.
    This isn't possible in Premiere Pro, at least not without putting it into a sequence first, and that's the next step you want to take after all the synchronizing is done, not something you want to do while cutting up the sources in a sequence.
    I believe most of what I need for this is already programmed, but it's mostly a matter of politics. Just please understand: synchronizing and preparing sources is different from working with a sequence, but it can be a substantial piece of work on its own, and it's not enough to just put it together in a menu and think that's all fine. (And not by far at that.)
    What I need is to be able to edit a multi-camera source sequence on a timeline like I would edit any other sequence. And call it a multi-source sequence; it isn't just video.
    Then audio from different sources can be mixed together on the timeline like you do in a sequence, and exported to Audition as a multi-track session for more advanced audio processing. (And no, single tracks aren't enough; I often need the real-time effects in Audition, not just the processing of whole tracks. And I mix together several audio sources, both mono and stereo tracks, some from cameras but mostly from a separate 24-bit audio recorder with inputs from hand or tie mics, the environment and perhaps a shotgun mic on a boom. I'm the audio guy with perhaps a camera on the side; others handle the main camera(s).)
    And video from several cameras can be put together, one track per camera on a timeline, and not just one clip per track: one camera can have several clips taken on a timeline.
    Then, when all this is put together, it can be used as a source, like we use a multi-camera sequence today.
    I believe it's possible, and it would increase the value of your offering. If this isn't done, I may end up looking for other software.

    Hi gino_76ph,
    You can compare different Nokia phone models and their features on Nokia developer website. Just follow the link below:
    http://www.developer.nokia.com/Devices/Device_specifications/?filter1=all
    Audio player features can be found under multimedia section, with differences highlighted.
    Hope this helps,
    Puigchild
    If you find this post helpful, a click upon the white star at bottom would always be appreciated.
    If it also solves your problem, clicking ACCEPT AS SOLUTION below it will benefit other users!

  • Not able to access the multi-source universe in WebI

    Hi
    I am not able to access the multi-source universe in WebI; I get the error message below.
    [Data Federator Driver] Unexpected exception: com.crystaldecisions.thirdparty.org.omg.CORBA.UNKNOWN: null | [Data Federator Driver] Failed to connect to any of the provided 'Central Management Server' hosts.
    I am also not able to do any design work on the multi-source universe in the Business Layer.
    The universe back end is
    Oracle 11g and
    SQL Server 2008.
    IDT version: 4.1 Support Pack 2
    SAP BusinessObjects BI Platform 4.1 Support Pack 2

    Hi Sreeni,
    You can create a new APS in the CMC containing the Data Federation Service with
    -Xmx set between 2g and 8g (this is the suggested range).
    Make sure you remove this service from the existing APS and then create the new one.
    You can refer to SAP KBA 1694041 - BI 4.x Consulting: How to size the Adaptive Processing
    Server (APS), which will assist you in sizing the APS.
    Regards,
    Manpreet

  • In XI mapping, mapping multiple fields to a single target field

    Hi Friends,
    In XI mapping, how do I map multiple source fields to a single target field?
    For example, my requirement is:
    Source fields (from an RFC/BAPI structure):
    Empno      0-1
    EmpName    0-1
    Address    0-1
    Target field:
    Details    0-1
    The above three fields are passed to the Details field, and I am using the concat function.
    But I have one requirement on top of that: after every field a line break is required.
    Can you please help me out with this requirement?
    Thanks in advance,
    Sateesh N.

    If you want a line break between the three fields, then try
    passing a, b and c to a UDF, and in the UDF you would have:
    return a + "\n" + b + "\n" + c;
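    As a fuller sketch, this is roughly what such a simple-value UDF looks like (the method name here is invented; the Container parameter belongs to the PI mapping API frame that the editor generates around the body you type):

    // Simple-value UDF sketch: concatenate three fields with line breaks.
    public String concatWithLineBreaks(String a, String b, String c, Container container) {
        // One line break after each field, as the requirement asks.
        return a + "\n" + b + "\n" + c;
    }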

  • Multi Source execution plan build issue

    Hi,
    I am trying to create/build a multi-source (homogeneous) execution plan in DAC from 2 containers for 2 of the same subject areas (Financial Payables) from 2 different EBS sources. I successfully built each individual subject area in each container and ran an execution plan for each individually, and it works fine. Now I am trying to create 1 execution plan based on the DAC steps. So far I have done the following:
    - Configured all items for both subject areas in the DAC (tables, tasks, task groups, indexes, connections, Informatica logical and physical folders, etc.)
    - Assigned EBS system A a different priority than EBS system B under Setup -> Physical Data Sources
    - Noticed that I have to change the physical folder priorities for each Informatica folder (SDE for container A versus SDE for container B); I assigned system A the higher priority
    - Built the execution plan.
    I am able to build the execution plan successfully, but I have the following issues/questions:
    1) I assumed that by doing the steps above it would ONLY execute the extract (SDE) for BOTH system containers but have only ONE load (SIL and PLP). I do see that the SDEs are OK, but I see the SIL for "Insert Row in Run Table" running for BOTH. Why is this? When I run the EP I get a unique constraint index error, since it inserts one record for each system. Isn't the DAC supposed to include only one instance of this task?
    2) When I build the execution plan, it includes the SILOS and PLP tasks from BOTH source system containers (SILOS and PLP folders exist in both containers). Why is this? I thought that there is only one set of LOAD tasks and only the SDEs are run for each (as this is a homogeneous load).
    3) What exactly does the physical folder priority do? How is this different from the source system priority? When I have a multi-source execution plan, do I need to assign physical folder priorities to just the SDE folders?
    4) When we run a multi-source execution plan, after the first full load, can we somehow allow incremental loads only from the container A subject area? Basically, I don't want to load incrementally from source system A after the first full load.
    5) Do I have to set a DELAY? In my case both my systems are in the same time zone, so I assume I can leave this DELAY option blank. Is that correct?
    Thanks in advance
    Edited by: 848613 on May 26, 2011 7:32 AM
    Edited by: 848613 on May 26, 2011 12:24 PM

    Hi
    You have 2 sources, like Ora11510 and OraR1211, so you will have 2 DAC containers.
    You need the below mandatory changes
    for your issue:
    +++++++++++++++++++++++++++++++++
    Message: Database errors occurred:
    ORA-00001: unique constraint (XXDBD_OBAW.W_ETL_RUN_S_U2) violated while inserting into W_ETL_RUN_S
    You need to inactivate 2 tasks in the R12 container:
    #1 Load Row into Run Table
    #2 Update Row into Run Table
    +++++++++++++++++++++++++++++++++
    There are other tasks that have to be executed only once
    (i.e. inactivate the below in one of the containers):
    SIL_TimeOfDayDimension
    SIL_DayDimension_GenerateSeed
    SIL_DayDimension_CleanSeed
    SIL_TimeOfDayDimension
    SIL_CurrencyTypes
    SIL_Stage_GroupAccountNumberDimension_FinStatementItem
    SIL_ListOfValuesGeneral_Unspecified
    PLP_StatusDimension_Load_StaticValues
    SIL_TimeDimension_CalConfig
    SIL_GlobalCurrencyGeneral_Update <don't inactivate this> <check for any issues while running>
    Update Parameters <don't inactivate this> <check for any issues while running>
    +++++++++++++++++++++++++++++++++++
    Task: SDE_ORA_EmployeeDimension_Addresses
    Unique index failure on "W_EMP_D_ADR_TMP_U1"
    As you are loading from 11.5.10 & R12, for certain data that is common across the systems the ETL index creation fails.
    Customize the index creation in DAC with another unique column (data_source_numID).
    ++++++++++++++++++++++++++++++++++++
    Task: SDE_ORA_GeoCountryDimension
    Unique index failure on "W_GEO_COUNTRY_DS_P1". As you are loading from 11.5.10 & R12, for certain data that is common across the systems the ETL index creation fails.
    Option 1) Customize the index creation in DAC with another unique column (data_source_numID).
    ++++++++++++++++++++++++++++++++++
    These changes are mandatory.
    Regards,
    Kumar

  • Source Optional vs Target Optional

    Using SDDM 3.3.0.747.
    I've got a situation with the Logical Model that is confusing me.  Can anyone shed some light for me?
    I have two Entities (i.e. the things that look like tables) in my logical model.  One table is Orders.  The other is Order Detail.
    If a row exists in the Order Detail table, it must be tied (via a PK/FK) to a row in the Orders table.  In other words, the Order Detail can't just be a random row -- it has to "belong" to an order.  There can be many order detail rows for a given Order (i.e. you can order multiple things on the same order, and each thing is stored on its own row in the Order Detail table).
    However, a single row in the Orders table doesn't necessarily have to be associated with any rows in the Orders Detail table.  For example, perhaps we just started the order and got interrupted before actually adding anything that we wanted to order.  So we can have an order number (PK in the Orders table) that doesn't yet tie to any rows in the Order Detail table.
    What I've just described seems to me to be a 1..0M, meaning that a single Order may be associated with any number of Order Detail rows, or none at all.  If the Orders table is on the left and the Order Detail table is on the right, I THINK I should see this connector: -|-----0<-
    I have set the Relation Properties as follows:
    Source Cardinality
    Source: Orders
    Source to Target Cardinality: - --<-*  (1 Order for many Order Details)
    Source Optional: UNCHECKED
    Target Cardinality
    Target: Order Detail
    Target to Source Cardinality: --1 (1 Order per Order Detail)
    Target Optional: CHECKED
    Now here's where my brain is getting all wonky: The O indicating an optional constraint is located on the Orders end of the connection line.  -|O-----<-   and to me, that feels backwards.  It feels like that's telling me that "multiple Order Detail lines can be connected to either zero or 1 order", and that's not correct.  An order detail line MUST be connected to an Order.  (Sure wish I could include a screenshot or two).
    I feel that the O should be on the Order Detail end of the line, which to me says "one order is associated with any number of detail lines, including zero".
    So to me, the position of the O feels wrong.
    I can move it into what I think is the "correct" position only by reversing the CHECKED and UNCHECKED status of the Source Optional and Target Optional boxes.  When I do that, the O moves, but the relation properties screen now appears wrong to me.
    I know this has to be really basic Data Modeling 101 stuff, but I'm just not getting it.  And I HAVE had my morning Starbucks, so that's not the trouble.
    Any help in getting me thinking straight?

    AH-HAH!!!  Now I get it.  If we forget Orders and Order Details and instead look at a list of Women and a list of Children, it makes more sense.
    There is a one-to-zero-or-many relationship between Women and Children.   I have a list of Women.  For each woman, it is her option to have children or not.  The option rests with the Woman. 
    But a child has no such option. If the child exists, it has no option as to whether or not it had a mother.
    So the words 'Target Optional' do, in fact, mean 'The Target Is Optional'. If I am looking at one woman, it is indeed optional as to whether or not that woman has children.  Children (target) are not required (i.e. they are optional) for every woman (source).  Therefore, there will be an O on the relationship cardinality line, indicating that the relationship is optional.
    What was hard to explain was the positioning of the O on the cardinality line.  The presence of the O simply means that the relationship is optional.  That much is easy.
    But I was expecting the O to be positioned on whichever end of the relationship is the optional one (i.e. children are optional, so the O should be positioned on the children's end of the line), and that is not true.  The position of the O indicates which entity the option rests with.  (Which, I contend, is still backwards, but at least now I can explain it.  I don't like it, but I can explain it.)  The woman may, at her option, have one or more children.  That's the way to translate the cardinality line into spoken words when the O is on the woman's (i.e. source) end of the line. 
    Philip, thank you for hanging in there with me.  Correct Answer awarded.
