Transaction Data Load from BW InfoProvider

Hi gurus,
I am doing a transaction data load from a BW InfoProvider feed to a BPC cube. When I validated the transformation file, the task completed successfully with some skipped records, as expected based on the conversion files.
ValidateRecords = YES
[List of conversion file]
Conversion file: DataManager\ConversionFiles\EXAMPLES\TIMECONV.XLS!CONVERSION
Conversion file: DataManager\ConversionFiles\EXAMPLES\VERSIONCONV.XLS!CONVERSION
Conversion file: DataManager\ConversionFiles\EXAMPLES\ACCOUNTCONV.XLS!CONVERSION
Conversion file: DataManager\ConversionFiles\EXAMPLES\ENTITY.XLS!CONVERSION
Record count: 25
Accept count: 13
Reject count: 0
Skip count: 12
This task has successfully completed
But when I run the package, the load fails.
/CPMB/MODIFY completed in 0 seconds
/CPMB/INFOPROVIDER_CONVERT completed in 0 seconds
/CPMB/CLEAR completed in 0 seconds
[Selection]
InforProvide=ZPCAOB01
TRANSFORMATION= DATAMANAGER\TRANSFORMATIONFILES\EXAMPLES\ABF_TRANS_LOAD.xls
CLEARDATA= Yes
RUNLOGIC= No
CHECKLCK= No
[Messages]
Task name CONVERT:
No 1 Round:
Error occurs when loading transaction data from other cube
Application: TEST Package status: ERROR
This is a fresh system and we are doing the data load for the first time. We are using BPC NW 7.5 with SP4.
Is there something we are missing that is supposed to be performed before starting the load for the first time?
My transformation file is as below:
*MAPPING
Account=0ACCOUNT
Currency=0CURRENCY
DataSrc=*NEWCOL(INPUT)
Entity=ZPCCCPLN
ICP=*NEWCOL(ICP_NONE)
Scenario=0VERSION
Time=0FISCPER
SIGNEDDATA=0AMOUNT
*CONVERSION
TIME=EXAMPLES\TIMECONV.XLS
SCENARIO=EXAMPLES\VERSIONCONV.XLS
ACCOUNT=EXAMPLES\ACCOUNTCONV.XLS
ENTITY=EXAMPLES\entity.xls
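For completeness, the ValidateRecords setting echoed in the validation log above would normally sit in an *OPTIONS block at the top of this file; a minimal sketch showing only that one option (the real file may contain more):
*OPTIONS
ValidateRecords=YES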
Thanks a lot in advance.
Regards
Sharavan

Hi Gersh,
Thanks for the quick response.
I checked SLG1 and I have the below error in the log.
Class: CL_UJD_TRANSFORM_PROXY:CONSTRUCTOR Log:DATATYPE 3
Class: CL_UJD_TRANSFORM_PROXY:CONSTRUCTOR Log:CURRENT_ROUND 1
Error occurs when loading transaction data from other cube
Message no. UJD_EXCEPTION137
We are on BPC NW 7.5, SP4. Please advise.
Regards
Sharavan.

Similar Messages

  • Regarding master data, transactional data loading from flat file

    Hi friends,
    Please tell me how to load master data and transactional data from flat file ....
    Thanks in advance ,
    Regards,
    ramnaresh.

    Hi,
    Please use the 'search forum' functionality and search the BI Forum with, say, 'flat file loading'. You will get plenty of links to previous threads.
    BR/
    Mathew.

  • Transaction Data Load from Infoprovider Problem?

    Hi Dear colleagues;
    I have a problem loading transaction data from an InfoProvider. I have a transformation file and I prepared our mapping. My transformation file looks like this:
    http://img822.imageshack.us/i/capture1cw.jpg/
    Then I run the package. The result looks like this:
    http://img291.imageshack.us/i/capture2m.jpg/
    and the package status looks like this:
    http://img836.imageshack.us/i/53445642.jpg/
    As you can see in the third picture, the submit count seems to be 18525, so I should have 18k records in my BW cube. Let's look at the BW side:
    http://img513.imageshack.us/img513/5760/capture4o.jpg
    As you can see, my BW cube is empty
    http://img137.imageshack.us/img137/6451/capture3dg.jpg
    and there is no package in my BW cube.
    I have tried to explain my problem this way.
    What should I do in this situation?

    Thanks for your answer, Nilanjan. But when I check the rejected records and rejected data, I couldn't find any hint. I also want to ask one more question:
    my transformation selection row looks like this:
    SELECTION=0SALESORG,1000;0SALESORG,2000;0SALESORG,3000
    Can I write a multiple selection for 0SALESORG (e.g. 0SALESORG,2000,3000...)? Is there any syntax for that?
    Take it easy...

  • Error while doing Transaction data loading from ECC to BW

    HI,
    I faced an error in some data records, so I corrected them manually in PSA maintenance. Now, when I try to schedule the data load, I go to that DataSource, right-click on Manage, select the request which I have updated with the correct records, and then right-click again --> Update with Scheduler. On the next screen, Scheduler (PSA Subsequent Update), I am not getting any data target: the data target field is disabled, so I cannot schedule it for the load.
    Any solution for this error?
    Thanks
    Nilesh Pathak

    Hi,
    I guess you have not deleted the request from the InfoCube 'Manage' screen. If the request is already updated to your InfoCube/ODS, then it will not show the data target in the PSA scheduler.
    You can try to re-load the data using 'Only PSA' in the InfoPackage, then correct the errors and load the data using the Update option from the Manage screen.
    Thanks

  • Automated data load from APO to SEM transactional cube

    Hi ,
    We have BW-SEM system integrated with APO system.
    I can see automated data loads from APO to the SEM transactional cube,
    with the InfoPackage name "Request loaded using the APO interface without monitor log".
    I don't see any InfoPackage by this name in either system (APO or SEM).
    I am not sure how it is configured.
    I would appreciate any input on how this happens.
    Thanks in advance

    Hi,
    As I mentioned, the starting point will be transaction BPS0. There will be two planning areas created (if I am correct), one for the SEM cube and the other for the APO cube. The best way to find them is to go to transaction SE16, enter table UPC_BW_AREA, and key the cube names into the cube field. This will give you the planning area names. Now look for a multi-planning area which has the two areas included in it (this is available in table UPC_AREAM).
    Then go to BPS0, where you will have to find which function is being used to post the data.
    thanks

  • Help Required regarding: Validation on Data Loading from Flat File

    Hi Experts,
    I need your help with the following issue.
    I need to validate the transactional data load to the GL cube from a flat file:
    1) The transactional data should be loaded to the cube only if a master data record exists for the “0GL_ACCOUNT” InfoObject.
    2) If the master data record does not exist, the record should be skipped from the load, and after the load the system should throw a message saying how many records have been skipped (if there are any skipped records).
    I would really appreciate your help and suggestions on solving this issue.
    Regds
    Hari

    Hi, write a start routine in the transfer rules like this:
      DATA: l_s_datapak_line TYPE transfer_structure,
            l_s_errorlog     TYPE rssm_s_errorlog_int,
            l_s_glaccount    TYPE /bi0/pglaccount,
            new_datapak      TYPE tab_transtru.
      REFRESH new_datapak.
      LOOP AT datapak INTO l_s_datapak_line.
        SELECT SINGLE * FROM /bi0/pglaccount INTO l_s_glaccount
          WHERE chrt_accts EQ l_s_datapak_line-<field in transfer structure/DataSource for CHRT_ACCTS>
            AND gl_account EQ l_s_datapak_line-<field in transfer structure/DataSource for GL_ACCOUNT>
            AND objvers    EQ 'A'.
        IF sy-subrc EQ 0.
          APPEND l_s_datapak_line TO new_datapak.
        ENDIF.
      ENDLOOP.
      datapak = new_datapak.
    * abort <> 0 means skip the whole data package!
      IF datapak[] IS INITIAL.
        ABORT = 4.
      ELSE.
        ABORT = 0.
      ENDIF.
    I have already made some modifications, but you can slightly change it to suit your needs.
    regards
    Emil
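    A possible way to also cover the second requirement (reporting how many records were skipped) is sketched below. It assumes the generated start routine exposes a monitor table matching the l_s_errorlog work area declared above; the table name g_t_errorlog and the generic message 001 of message class 00 are assumptions, so adjust them to your routine template.
      DATA: l_skipped TYPE i.
    * inside the LOOP, extend the existing IF with an ELSE branch:
      IF sy-subrc EQ 0.
        APPEND l_s_datapak_line TO new_datapak.
      ELSE.
        l_skipped = l_skipped + 1.            " no 0GL_ACCOUNT master data found
      ENDIF.
    * after the LOOP, write one monitor message with the skipped count:
      IF l_skipped > 0.
        CLEAR l_s_errorlog.
        l_s_errorlog-msgty = 'I'.
        l_s_errorlog-msgid = '00'.            " generic message class (assumption)
        l_s_errorlog-msgno = '001'.           " message 001 of class 00 is '& & & &'
        l_s_errorlog-msgv1 = l_skipped.
        l_s_errorlog-msgv2 = 'records skipped - missing master data'.
        APPEND l_s_errorlog TO g_t_errorlog.  " monitor table name is an assumption
      ENDIF.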

  • Number of parallel process definition during data load from R/3 to BI

    Dear Friends,
    We are using BI 7.00. We have a requirement in which I should increase the number of parallel processes during data loads from R/3 to BI. I want to modify this for a particular DataSource and check it. Can the experts provide helpful answers to the following questions?
    1) When a load is taking place, or has taken place, where can we see how many parallel processes that particular load has used?
    2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
    3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
    Expecting the experts' help.
    Regards,
    M.M

    Dear Des Gallagher,
    Thank you very much for the useful information provided. The following was my observation.
    From the posts in this forum, I understood that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I did the same and found that there is no change in the load, i.e. the system by default takes only one parallel process even though I maintained 6.
    Can you kindly explain the above point, i.e.:
    1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the one parallel process?
    Actually, we wanted to increase the package size, but we failed because I could not understand which values have to be maintained. Can you explain in detail?
    Can you clarify my doubt and provide a solution?
    Regards,
    M.M

  • Data loading from flat file to cube using bw3.5

    Hi Experts,
    Kindly give me the detailed steps, with screenshots, for data loading from a flat file to a cube using BW 3.5. Please.

    Hi ,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field (see the short example file after this procedure).
    You chose the ; character as the data separator. However, your file contains the value 12;45 for a field. If you set “ as the escape character, the value in the file must be “12;45” so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within the value, the system interprets the escape characters as a normal part of the value. If you have specified “ as the escape character, the value 12”45 is transferred as 12”45 and 12”45” is transferred as 12”45”.
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
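    To make the escape-character rule from step 4f concrete, here is a hypothetical two-line CSV file that uses ; as the data separator and " as the escape character; the quoted value is loaded into BI as 12;45, the other values as plain values (the column names are made up, and the first line would be declared as a header row in the DataSource settings):
    ACCOUNT;AMOUNT;TEXT
    400000;1250,50;"12;45"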

  • Transactional data loads PIR, IM stock, Open PO's documentation

    I have to write documentation for the process of transactional data loads:
    purchase info records, IM stock, and open POs.
    The transactional data is live in both SAP and the legacy system, so they have to match in both systems at all stages of this process.
    How do I maintain that is the question.
    Please send me any details regarding this.
    thank you
    sridhar

    Check these three things:
    /n/sapapo/CCR
    /n/sapapo/CQ
    Check what type of stock has active IM, and what type of stock went in after you created the GR.
    If you still have any problems, let us know.
    My

  • Master Data/transactional Data Loading Sequence

    I am having trouble understanding the need to load master data prior to transactional data. If you load transactional data and there is no supporting master data, when you subsequently load the master data, are the SIDs established at that time, or will they not sync up?
    I feel in order to do a complete reload of new master data, I need to delete the data from the cubes, reload master data, then reload transactional data.  However, I can't explain why I think this.
    Thanks,  Keith

    A different approach is required for different data target scenarios. Below are just two scenarios out of many possibilities.
    Scenario A:
    The data target is a DataStore object with the indicator 'SIDs Generation upon Activation' set in the DSO maintenance,
    using a DTP for data loading.
    The following applies depending on the indicator 'No Update without Master Data' in the DTP:
    - If the indicator is set, the system terminates activation if master data is missing and produces an error message.
    - If the indicator is not set, the system generates any missing SID values during activation.
    Scenario B:
    The data target has a characteristic that is determined in the transformation rules/update rules by reading master data attributes.
    If the attribute is not available during the data load to the data target, the system writes an initial value to the characteristic.
    When you reload the master data with attributes later, you need to delete the previous transaction data load and reload it, so that the transformation can re-determine the attribute values written to the characteristics in the data target.
    Hope this helps you understand.
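    To illustrate Scenario B, an attribute lookup in a transformation rule routine typically looks like the fragment below. This is only a sketch: 0MATERIAL and its attribute MATL_TYPE are example objects, the field names of the /BI0/PMATERIAL table are assumed, and the generated method frame (METHOD ... ENDMETHOD) is omitted.
      DATA: l_matl_type TYPE /bi0/oimatl_type.
    * Read the attribute from the master data P table; if no master data
    * record exists, l_matl_type stays initial, and that initial value is
    * exactly what then lands in the characteristic of the data target.
      SELECT SINGLE matl_type FROM /bi0/pmaterial INTO l_matl_type
        WHERE material = SOURCE_FIELDS-material
          AND objvers  = 'A'.
      RESULT = l_matl_type.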

  • BPC:: Master data load from BI Process chain

    Hi,
    We are trying to automate the master data load from BI.
    Now we are using a package with:
    PROMPT(INFILES,,"Import file:",)
    PROMPT(TRANSFORMATION,%TRANSFORMATION%,"Transformation file:",,,Import.xls)
    PROMPT(DIMENSIONNAME,%DIMNAME%,"Dimension name:",,,%DIMS%)
    PROMPT(RADIOBUTTON,%WRITEMODE%,"Write Mode",2,{"Overwrite","Update"},{"1","2"})
    INFO(%TEMPNO1%,%INCREASENO%)
    INFO(%TEMPNO2%,%INCREASENO%)
    TASK(/CPMB/MASTER_CONVERT,OUTPUTNO,%TEMPNO1%)
    TASK(/CPMB/MASTER_CONVERT,FORMULA_FILE_NO,%TEMPNO2%)
    TASK(/CPMB/MASTER_CONVERT,TRANSFORMATIONFILEPATH,%TRANSFORMATION%)
    TASK(/CPMB/MASTER_CONVERT,SUSER,%USER%)
    TASK(/CPMB/MASTER_CONVERT,SAPPSET,%APPSET%)
    TASK(/CPMB/MASTER_CONVERT,SAPP,%APP%)
    TASK(/CPMB/MASTER_CONVERT,FILE,%FILE%)
    TASK(/CPMB/MASTER_CONVERT,DIMNAME,%DIMNAME%)
    TASK(/CPMB/MASTER_LOAD,INPUTNO,%TEMPNO1%)
    TASK(/CPMB/MASTER_LOAD,FORMULA_FILE_NO,%TEMPNO2%)
    TASK(/CPMB/MASTER_LOAD,DIMNAME,%DIMNAME%)
    TASK(/CPMB/MASTER_LOAD,WRITEMODE,%WRITEMODE%)
    But we need to include these tasks in a BI process chain.
    How can we add the INFO statement to a process chain?
    And how can we declare the variables?
    Regards,
    EZ.

    Hi,
    I have followed your recommendation, but when I try to use the process /CPMB/MASTER_CONVERT with the parameter TRANSFORMATIONFILEPATH and the path of the transformation file as its value, I have a new problem. The value can only hold 60 characters, and my path is longer:
    \ROOT\WEBFOLDERS\APPXX\PLANNING\DATAMANAGER\TRANSFORMATIONFILES\trans.xls
    How can we enter this path?
    Regards,
    EZ.

  • Data loading from DSO to Cube

    Hi,
    I have a question,
    In the book TBW10 I read about the data load from a DSO to an InfoCube:
    "We feed the change log data to the InfoCube; 10, -10, and 30 add up to the correct 30 value."
    My question is: the cube already has the value 10, so if we are sending 10, -10 and 30 as the delta, the total should be 40 instead of 30.
    Please can someone explain this to me.
    Thanks

    No, it will not be 40.
    It will be 30 only.
    Since the cube already has 10, the before image will nullify it by sending -10, and then the correct value in the after image will be added as 30.
    So it will be like this: 10 - 10 + 30 = 30.
    Thank-You.
    Regards,
    Vinod
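    Spelled out with the change log records (a sketch; 0RECORDMODE values as BW uses them: 'N' = new image, 'X' = before image, ' ' = after image):
    1st activation, record is new:        change log  +10   (recordmode 'N')
    Record is changed from 10 to 30:      change log  -10   (recordmode 'X', before image)
                                          change log  +30   (recordmode ' ', after image)
    InfoCube after loading both deltas:   10 + (-10) + 30 = 30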

  • Error is data loading from 3rd party source system with DBCONNECT

    Hi,
    We have just finished an upgrade of SAP BW 3.10 to SAP NW 7.0 EHP1.
    After the upgrade, we are facing a problem with data loads from a third party Oracle source system using DBConnect.
    The connection is working OK and we can see the tables in the source system. But we cannot load the data.
    The error in the monitor is as follows:
    'Error message from the source system
    Diagnosis
    An error occurred in the source system.
    System Response
    Caller 09 contains an error message.
    Further analysis:
    The error occurred in Extractor .
    Refer to the error message.'
    But, unfortunately, the error message has no further information.
    If we look at the job log in sm37, the job finished with the following log -                                                                               
    27.10.2009 12:14:19 Job started                                                                                00           516          S 
    27.10.2009 12:14:19 Step 001 started (program RSBATCH1, variant &0000000000119, user ID RXSAHA)                    00           550          S 
    27.10.2009 12:14:23 Start InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG                                              RSM1          797          S 
    27.10.2009 12:14:24 Element NOAUTHORITYCHECK is not available in the container                                     OL           356          S 
    27.10.2009 12:14:24 InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG created request REQU_4FMXSQ6TLSK5CYLXPBOGKF31G     RSM1          796          S 
    27.10.2009 12:14:24 Job finished                                                                                00           517          S 
    In a BW 3.10 system, there is no  message related to element NOAUTHORITYCHECK. So, I am wondering if this is something new in NW 7.0.
    Thanks in advance,
    Rajib

    There are several things that can cause errors like this:
    1. RFC connection failed.
    2. Check the source system.
    3. Check with the Oracle consultants whether they are filling up the loads; tell them to stop.
    4. Check IDoc processing.
    5. Memory issues.
    6. Check the DataSource first: change it, then activate it and run the load.
    7. Last is a memory issue.
    Also check the RFC connection in SM59. If it is OK, then
    check SAP Note 692195 for authorization.
    Santosh

  • How to find the data loaded from r/3 to bw

    Hi,
    How can I find out whether the data loaded from R/3 to BW is correct? I am not able to find which field in the query is connected to which field in R/3, or where I am getting the data from in R/3. Is there any process to find which field and table the data is coming from? Please help.
    Thanks in advance to you all.

    Hi Veda ... the mapping between R/3 fields and BW InfoObjects takes place in the transfer rules. Other transformations could take place in the update rules.
    So you could proceed this way: look at the InfoProvider data model and see whether the query performs any calculations (even with virtual key figures / characteristics). Then go back to the update rules and search for other calculations / transformations. Finally there are the transfer rules and possibly DataSource / extraction enhancements.
    As you can easily see, there are many points you have to look at ... it's quite complex work, but very useful.
    Once you have identified all mappings / transformations, see whether the BW data matches R/3 (considering the calculations ...).
    Good job
    GFV

  • How to rectify the errors in master data loads & transactional data loads?

    Hi,
    Can anyone please tell me
    how to rectify the errors in master data loads & transactional data loads?
    Thank you,
    Ravi

    Hi,
    Please post specific questions in the forum.
    Please explain the error you are getting.
    -Vikram
