Transporting SPM-Extractors

Hi,
I just ran into some problems transporting the SPM extractors (I know it's not the recommended way).
The extract structures don't have an object catalog entry. Reactivating them creates one, but the data types generated inside them still don't have an entry. Is there a more comfortable way than reactivating every single data element?
Thanks and best regards
Pascal

Hi Pascal,
As you mentioned, this is not the recommended way to transport. What we suggest is transporting the metadata and generating the objects in each client separately. If you stay with the generated objects, there is currently no other way but to create the object directory entry for each of them individually.
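If you are stuck with the generated objects, something along these lines could at least automate the repair. This is only a sketch: the ZSPM% name pattern and the package ZSPM_GEN are assumptions, and TR_TADIR_INTERFACE should be tried in test mode in a sandbox first.

REPORT zspm_fix_tadir.

" Sketch: create missing object directory (TADIR) entries for generated
" data elements. The name pattern and package are hypothetical - adjust.
DATA: lt_dtel TYPE TABLE OF dd04l-rollname,
      lv_dtel TYPE dd04l-rollname.

SELECT rollname FROM dd04l INTO TABLE lt_dtel
  WHERE rollname LIKE 'ZSPM%'
    AND as4local = 'A'.               " active versions only

LOOP AT lt_dtel INTO lv_dtel.
  CALL FUNCTION 'TR_TADIR_INTERFACE'
    EXPORTING
      wi_test_modus     = space       " set to 'X' for a dry run
      wi_tadir_pgmid    = 'R3TR'
      wi_tadir_object   = 'DTEL'
      wi_tadir_obj_name = lv_dtel
      wi_tadir_devclass = 'ZSPM_GEN'  " hypothetical package
    EXCEPTIONS
      OTHERS            = 1.
  IF sy-subrc <> 0.
    WRITE: / 'No directory entry created for', lv_dtel.
  ENDIF.
ENDLOOP.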
Thanks and Regards,
Divyesh

Similar Messages

  • SPM Extractors from SAP R/3

    Hi, I am quite new to SPM and am currently investigating options for the data extraction activity from R/3 to SPM.
    I could see there is an extraction wiki, where I found very useful information, and the main options seem to be:
    1. Use the SAP-provided ABAP extractors (as a starter kit, probably)
    2. Write our own ABAP programs
    3. Use another ETL tool, for example BOBJ IM (Business Objects Integration Manager)
    My main concern is performance, as I expect to have to deal with a high volume of data. I'd like to hear some past experience of how the SAP-provided extractors work. Have any of you implemented these extractors or had the chance to see them running? Does the delta capability work fine? Is the delta based only on a document creation/change date, or are other changes (in master data, for example, where we do not always have a change date) also captured? How much customizing should I expect will be required (of course I understand this depends on how much customization we have in our R/3)?
    Is it worth exploring these standard extractors, or should I rather start considering another option like BOBJ-IM? I think I will have to use BOBJ-IM anyway for some non-SAP data sources which I also have to include.
    Any experience with these extractors is appreciated!
    Many thanks.
    Regards,
    Claudia.

    Hi Claudia,
    Below are some of my comments in regards to SPM extractors.
    1. Using the SPM extractors as a starter kit is a great idea. This will bring in all the necessary data from standard R/3 tables with the logic needed to populate SPM.
    2. Taking the route of writing your own ABAP programs / extractors may be necessary only if you have quite a lot of customization in R/3 as your source of data.
    3. Using BOBJ IM is necessary for extracting data from non-SAP data sources.
    As always, the initial load can be a lengthy process. To speed it up, it is recommended to break it down into smaller, manageable loads.
    Delta is primarily based on create / change date for both master data and transaction data.
    In terms of customizations, you are right, it depends on what you are looking for; you may end up adding new fields or writing an exit / some code to add business logic to existing fields. I have seen customers do a combination of both.
    I have seen quite a lot of customers take the route of the delivered extractors, which gives them a great start. Secondly, by looking at the extractors you can tell exactly which tables and fields are used to extract data; that way, even if you are planning to build your own extractors, you know exactly which fields are needed.
    Regards,
    Rohit

  • SPM Extractors Starter Kit

    Hi,
    I am working with SPM, extracting data from one of our R/3 systems (R/3 4.7). I installed the extractors in our system, copied the existing project delivered by SAP and then generated the extractors for the objects that we need. My question is regarding delta capability. I read that delta is available for some objects, based on their creation, posting or last change date. Nevertheless, I cannot make any of them work in delta mode (even those where a date is available for delta extraction, for example Cost Centers, Invoices, POs). I go to transaction Z_SA_EXTR, select project and object, and then click on Start. I immediately get the message 'Object file created successfully' (which is true, the file is there). But I am never asked whether I want to execute it in full, init or delta mode.
    The SPM application will be installed on a standalone server (not in any of our existing BW systems), so I will be generating flat files to be FTPed to the final destination. Is it that the delta mechanism is not available if I do not have a DataSource for BW?
    Please clarify how this works. Many thanks!
    Claudia.

    Hi Claudia,
    SPM extractors do run in delta mode. The SPM application maintains the previous successful load date & time. When you extract again, the range between the previously recorded date & time and the current date & time is used to bring in the delta records. This date range is automatically passed to the InfoPackage as a date filter during extraction to bring in the delta extract. This is also the reason only one InfoPackage has to be created against the SPM extractors in BW.
    When executing these extractors in the source system using RSA3 (extractor test), you can pass the date range to retrieve delta records for testing.
    The very first time you run the extractors they will run in INIT mode, meaning they bring in all the records from the beginning of time until today; the next load will then run in DELTA mode, from the previous successful load until today.
    If you want to run multiple INIT loads, i.e. break the INIT load into multiple pieces, use the following steps:
    - Specify a date range in the InfoPackage on the BW side to bring in one year's worth of data (01/01/2009 to 12/31/2009)
    - Make sure you delete the previous successful-load timestamp from the table OPM_SOURCE_ST, which maintains the extractor status. This is the table used to figure out which load was the previous successful one.
    - Repeat the above 2 steps depending on the number of INIT loads you want to perform
    - For the final load, only specify the from-date and leave the to-date blank; this will bring in data up to the current date. After a successful load the system will record the current date in the table, and that date will be used for the next extract as the delta starting point.
    Keep in mind that a safety factor of 5 days is used during extraction to make sure all records are extracted.
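    To illustrate the mechanism, here is a minimal sketch of how the delta window could be derived. Only the table name OPM_SOURCE_ST comes from this thread; the field names, keys and types below are assumptions.

    " Sketch: derive the delta extraction window from the last successful load.
    DATA: lv_last_load TYPE d,
          lv_from      TYPE d,
          lv_to        TYPE d.

    SELECT SINGLE last_load_date            " hypothetical field name
      FROM opm_source_st INTO lv_last_load
      WHERE project = 'SPM_PROJECT'         " hypothetical keys
        AND object  = 'INVOICE'.

    lv_from = lv_last_load - 5.   " the 5-day safety factor mentioned above
    lv_to   = sy-datum.           " extract up to today
    " lv_from / lv_to correspond to the date filter passed to the InfoPackage.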
    Hope this helps.
    In regards to your previous question on package size, please read the article posted on this topic - /people/divyesh.jain/blog/2010/07/20/package-size-in-spend-performance-management-extraction
    Regards,
    Rohit

  • SPM Extractor Kit 2.1

    Hi,
    I would like to check with you in the post below whether anyone has had a similar experience.
    Re: Help in Spend Analytics 2.1 Data model
    Thanks,
    Vikas

    Hi Vikas,
    In terms of what fields are in the data model vs. what fields are provided by the extractors, they certainly do not match. Below are a few reasons:
    1. Some fields are calculated while data is being loaded into the SPM data model.
      Example: transaction currency measures to global currency measures; similarly for quantity measures.
    2. Some fields may not be available in the source and may come from a different source. But if your source system maintains them, you can specify the source and populate them by enhancing the extractor.
      Example: Cleansed Category (or just Category). Typically the source system category is different from the cleansed category; similarly Source System Supplier vs. Supplier, and a few other master data objects.
    3. A few fields are determined when loading data into SPM on the BI side.
      Example: Upload ID, which is a unique ID assigned for every data load. Source System (not to be confused with the physical source of your data) is a field that logically identifies / separates source-system-dependent master / transaction data.
    4. A few other fields are concatenations of multiple fields, which are also generated when loading data into the SPM data model.
      Example: Document Number + Item Number + Source System to create a unique ID.
    I think there might be a few more that I forgot, but you get the general idea. Hopefully this answers your questions.
    Regards,
    Rohit

  • CO-PA Extractor - How to Transport?

    Hi everybody,
    I already transported my extractor.
    With RSA3 in the source system I can see the data. No errors with the extractor.
    But when I run the InfoPackage, no data is retrieved.
    The extractor is client and system dependent. So, is it fine to transport it, or do I have to create it again in the SAP R/3 quality system?
    The timestamp looks OK, with the date I ran the initialization, but no data is retrieved. Even with a full load!
    Thank you!
    Rosana

    Rosana,
    There are two types of extractors that can be created in CO-PA:
    1. Cost-based extractors
    2. Account-based extractors
    Go to R/3 and use transaction KEB0, the standard CO-PA extractor creation transaction.
    1. In KEB0, give the client name or system ID (these get changed to the corresponding system ID and client as you transport). If you want a constant name you can skip this.
    2. On the next screen, select the fields that you need.
    3. Save the extractor and activate it.
    4. Go to RSA3 and see whether the extractor works and extracts records. If you have created a cost-based extractor, it will get its values from the CO-PA specific tables
    CE1<operating concern>,
    CE2<operating concern>,
    CE3<operating concern> and
    CE4<operating concern>, so you can check your extracted data against these tables.
    5. If the RSA3 results are satisfactory, go to RSA6 (post-installation DataSources), put the cursor on the extractor (CO -> CO-PA -> extractor), click the transport button (the van) and give the package, transport request ID, etc.
    6. Your transport is ready and it is good to go.
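    As a quick plausibility check for step 4, you can also count the line items in the CO-PA table directly and compare with the RSA3 result. A sketch only: CE1XXXX stands for your CE1<operating concern> table, and BUDAT as the posting date field is an assumption to verify in SE11.

    " Sketch: cross-check the extractor output against the CO-PA line items.
    DATA lv_count TYPE i.

    SELECT COUNT(*) FROM ce1xxxx INTO lv_count
      WHERE budat BETWEEN '20100101' AND '20101231'.   " illustrative range

    WRITE: / 'CO-PA line items in range:', lv_count.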
    Hope this answers your question; award points if useful.
    Good luck,
    Alex (Arthur Samson)

  • Can SPM realize an AP requirement?

    Hi,
    Our company wants to use SPM to meet an AP reporting requirement. Is it possible?
    The AP reporting requirement contains these fields:
    company code -- from invoice data
    cost center -- from invoice data
    GL account -- from invoice data
    PO number -- from invoice data
    vendor -- from invoice data
    invoice number -- from invoice data
    invoice amount -- from invoice data
    invoice paid amount -- from AP data
    payment term -- from AP data
    invoice status -- from GL data
    invoice date -- from invoice data
    invoice paid date -- from AP data
    So basically, the DataSources cover AP and invoices. I want to know whether SPM can meet this requirement. There are some related questions:
    Will the SPM data management tool just use some specific standard DataSources, for example 2LIS_06 for invoices and 2LIS_02 for purchase orders? Per the SPM documents I found on the web, no AP BI content for SPM is mentioned. Can the 0FI_AP_4 DataSource be used by SPM?
    Shall we use customized DSOs in SPM, not the standard ones (0ASA_DSXX)?
    Can DataSources be enhanced in SPM?
    Thanks,
    Jack

    Hi Jack,
    Looking at your requirement, we cover the invoice portion of it and a lot more. SPM has its own set of extractors that are used to pull data from ERP; we do not use any standard BW DataSource. But if you are already pulling all the required information [invoice, AP] using the standard BW DataSources, you can load that data into the SPM data model [it will be your responsibility to map the necessary fields and load].
    As part of the standard SPM extractors and data model we do not have AP data, so you can either build a new set of objects [DSO and cube] or reuse the 0FI* objects to hold this data, and add the necessary fields to the final SPM MultiProvider in the reporting area.
    The SPM application has functionality that lets you introduce a new data model [like the one mentioned above] and expose additional content in the SPM UI based on your needs. You can also modify the SPM extractors to bring in additional information such as AP or any other master data that is necessary.
    Hope this answers your question.
    Regards,
    Rohit

  • SPM data extraction question: invoice data

    The documentation on data extraction (Master Data for Spend Performance Management) specifies that invoice transactions are extracted from the table BSEG (General Ledger). On the project I'm currently working on, the SAP ERP team is quite worried about running queries on BSEG, as it is massive.
    However, the extract files are called BSIK and BSAK, which seems to suggest that the invoices are in reality extracted from those accounts payable tables.
    Can someone clarify which tables are really used, and if it is the BSIK/BSAK tables, which fields are mapped?

    Hi Jan,
    A few additional mitigation thoughts which may help along the way, as the same concerns came up during our project.
    1) Sandbox stress testing
    If available - take advantage of an ECC sandbox environment for extractor prototyping and performance impact analysis. BSEG can be huge (it contains all financial movements), so BI folks typically do not fancy a re-init load, for the reasons outlined above. Ask Basis to copy a full year of all relevant transactional data (normally FI & PO data) onto the sandbox and then run the SPM extractors for a full-year extraction to get an idea of the system impact of the extraction.
    Even though system sizing and parameters may differ compared to your P-box, you should still get a reasonable idea of and direction for the system impact.
    2) Staged initial load
    In a second step you may then consider breaking the data extraction (Init/Full mode for your project) down into 12 monthly extracts for the full year (this gives you 12 files from which you init your SPM system), with significantly less system impact and more control (e.g. they can be scheduled overnight).
    3) Business scenario
    You may consider using the vendor-related movements in BSAK/BSIK as the starting tables for the extraction process instead of the massive BSEG cluster table (and fetch / look up BSEG details only on a need basis); the index advantages were outlined above already.
    With this approach we managed to extract invoice data with reasonable source-system impact.
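    A minimal sketch of the driver-table idea in point 3 (the company code, year and field selection are illustrative; verify the keys against your release):

    " Sketch: select from the AP index tables first and touch the BSEG
    " cluster only for the documents actually found.
    DATA: lt_bsak TYPE TABLE OF bsak,
          lt_bseg TYPE TABLE OF bseg.

    SELECT * FROM bsak INTO TABLE lt_bsak
      WHERE bukrs = '1000'              " illustrative company code
        AND gjahr = '2010'.

    IF lt_bsak IS NOT INITIAL.
      SELECT * FROM bseg INTO TABLE lt_bseg
        FOR ALL ENTRIES IN lt_bsak
        WHERE bukrs = lt_bsak-bukrs
          AND belnr = lt_bsak-belnr
          AND gjahr = lt_bsak-gjahr.
    ENDIF.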
    Rgrds,
    Markus

  • Package saving extractors

    Hi
    I have created a new CO-PA costing-based extractor in the development system; when I try to save it, it asks for a package name.
    In which package should we save the extractor, and how do we transport this extractor to the BW development system?
    Regards
    Rajini

    Hi,
    You can use transaction SE21 to create a package. We do not transport from R/3 to BW; instead, collect the extractor in a transport so you can transport it to Q and then Prod as needed.
    On the BW side we normally replicate the DataSource and use it.
    For BW 3.x:
    1. RSA1 -> Source Systems -> double-click -> find the application component -> replicate the DataSource
    2. Assign the DataSource to an InfoSource, do the mapping and activate the transfer structure
    3. Create an InfoPackage for that InfoSource and extract the data.
    For BI 7.0:
    1. RSA1 -> Source Systems -> Display DataSources -> select the particular DataSource and replicate
    2. Create transformations; this will map your DataSource to the target (possibly a DSO)
    3. Create an InfoPackage for the DataSource to load the data into the PSA only (that's all you can do)
    4. Create a DTP for that DataSource and load the data from the PSA to the data target.
    5. If you have multiple targets, you will create multiple DTPs as well.
    Hope this helps,
    Thanks,
    rahul

  • Filter Credit Memos from Invoice Reports

    Hi All,
    The SPM extractor for invoices includes a filter based on posting key to only include the types 21, 22, 31 and 32, which are Credit Memo, Reverse Invoice, Invoice and Reverse Credit Memo. We have a requirement on our invoice-based reports to exclude credit memos, reverse invoices and reverse credit memos. As the posting key is not brought into the data model in SPM, is there any other way to create a filter to cater for this requirement?
    Regards,
    Gary Elliott

    Thanks Divyesh,
    We had thought this would be the case. We will look at extending the extractor to include the posting key. This will give the users the option to filter on the type of document.
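    Once the posting key (BSCHL) is part of the extract, the report-side filter can be as simple as the sketch below; the structure and table names are hypothetical.

    " Sketch: keep invoices (posting key 31) only and drop 21/22/32
    " (credit memo, reverse invoice, reverse credit memo).
    TYPES: BEGIN OF ty_inv,
             belnr TYPE belnr_d,   " document number
             bschl TYPE bschl,     " posting key, added via enhancement
           END OF ty_inv.
    DATA lt_invoices TYPE TABLE OF ty_inv.

    DELETE lt_invoices WHERE bschl <> '31'.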
    Many thanks,
    Gary

  • CRM Master Data Extraction

    Dear all,
    Can anyone point me to a how-to paper about CRM master data extraction? I need to know how to extract the Business Partner and its attributes (relationships, addresses and others), and also how to extract CRM marketing attributes for the Business Partner.
    Thanks for your help.
    Ricky

    Hello Ricky,
    I haven't found any conclusive document about loading Business Partners from CRM 4.0 to BW 3.5. If someone out there knows about such documentation, I would really appreciate it.
    Here is what we did regarding BP relationships, marketing attributes and addresses:
    1. Marketing attributes:
    This data type is difficult, because you have to build a special extractor for each marketing attribute group. And because marketing attributes cannot be transported, you cannot transport the extractor. This poses a big problem in many projects. We therefore solved it by creating some view extractors on the appropriate CRM tables: AUSP, CABN, KSML, KLAH. I can give you some more details if this helps.
    2. BP relations:
    We built a view extractor for this too, because it was much easier to extract the data from BUT050 and BUT051 than to find out how the business content extractors do their work. Using this extractor, we built some special ODS objects and InfoCubes for the relationships we were interested in (see the sketch further below).
    There is business content for BP relations, but we kept to the generic extractors because they were easier to verify.
    3. BP addresses:
    We use 0CRM_BPDEFADDR_ATTR to get the primary address.
    We use 0CRM_BPART_ATTR to get BP master data
    We use 0CRM_BPART_TEXT to get BP text.
    We have only about 300,000 BPs in CRM, so we do a full update every day.
    I am not sure whether these three DataSources are part of the current BW 3.5 business content, because we started with BW 3.0 and did not check for BC updates.
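    For the view extractor on BP relations mentioned in point 2, the underlying selection would look roughly like this. Only the table names BUT050/BUT051 come from this post; the field list is an assumption to be verified in SE11.

    " Sketch: general BP relationship data behind such a view extractor.
    TYPES: BEGIN OF ty_rel,
             partner1  TYPE but050-partner1,    " first partner
             partner2  TYPE but050-partner2,    " second partner
             date_from TYPE but050-date_from,
             date_to   TYPE but050-date_to,
           END OF ty_rel.
    DATA lt_rel TYPE TABLE OF ty_rel.

    SELECT partner1 partner2 date_from date_to
      FROM but050 INTO TABLE lt_rel.
    " BUT051 (contact person relationships) is joined on the same partner
    " keys in the actual DDIC view.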
    Kind regards,
    Jürgen

  • How to transport an enhancement project for an extractor (CMOD)

    Hi all,
    how can I transport a function enhancement for an R/3 extractor (created in transaction CMOD)?
    The extractor and the structure enhancement have already been transported, but when I go to SBIW, select the extractor under SAP-R/3, select Function Enhancement and enter the project name, it is not transported.
    thanks a lot

    Hi Juan,
    Have you tried activating the components (under this project) and the project itself in the target system?
    If you are unable to see the components/projects, you can try activating them in your development system; it will prompt for a transport request, which you can then import into your target system. Hope it helps,
    Sree

  • Extractors containing the Transport Requests

    Hi all,
    Are there any extractors which run on the tables related to transport requests?
    Specifically, are there any extractors which run on any of the following tables: E070, E070A, E070C, E070CREATE, E070DEP, E070L, E070M, E070TC, E070USE, E071, E071C, E071E, E071K, E071KF, E071KFINI, E071S, E07T?
    Any info on this will be highly appreciated.
    Regards,
    Deepa.

    In the meantime, did you find out something regarding this topic?
    thanks

  • How to change the update mode of an extractor in LBWE through a transport

    Hello Friends,
    I have a question on changing the update mode of the extraction in LBWE. Can we change this in the Dev system and transport it all the way to production? The client is not happy with opening the client for this. I'd appreciate a quick response.
    Thanks
    Simmi

    Hi,
    On the menu bar of the screen where you create the request, you'll see a button next to the delete button that lets you include objects in the request. Click that and you'll get the option to add objects from one request, from multiple requests, or freely selected objects.
    Or you can go to Request/Task -> Object list -> Include objects.
    The other way you can do this is in the LBWE screen: click on the active/inactive button next to the extract structure and it'll prompt you for a customizing request. If it is active when you click the button, it'll go inactive and prompt for a transport. Once that is saved in the transport, click the inactive button to activate it again; it'll get saved in the same transport.
    Cheers,
    Kedar

  • Error while transporting Generic Datasource in R/3

    Hi All,
    I have created a generic DataSource (R/3) on a view. As it was in $TMP, I assigned the DataSource ZABCD and the view ZVIEW to a development package and also to a transport request. I released the transport request, and when I tried to import it into Q, it threw an error with the messages below:
    Extract structure ZOX**** is not active
    The extract structure ZOX**** of the DataSource ZABCD is invalid
    Errors occurred during post-handling RSA2_DSOURCE_AFTER_IMPORT for OSOA L
    The errors affect the following components:
    BC-BW (SAP Business Information Warehouse Extractors)
    The extract structure ZOX**** is the structure used by the DataSource. I forgot to assign it to the earlier TR, and I think the error occurs because the structure is missing from the TR. Am I correct?
    How should I deal with it now? Do I need to create a new TR with this structure alone, as the earlier TR has already been released?
    Thanks,
    RPK.

    As you say, it is possible that the error occurred because the structure was not included in the earlier TR.
    You can create an empty TR in SE09 or SE10. In these transactions there is a button with a box drawn on it (include objects). Set the focus on your new empty TR and push that button.
    Then add the objects from the TR that was transported with the error, plus the missing structure.
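    The object list of the new TR would then contain entries along these lines (OSOA is the DataSource object type named in your import log; the structure name stays masked as in your post):

    R3TR TABL ZOX****   (the missing extract structure)
    R3TR OSOA ZABCD     (optionally re-include the DataSource itself)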

  • TcpListener not working on Azure: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host

    Hi Everybody,
    I'm playing a little bit with Windows Azure and I'm blocked by a really simple issue (or maybe not).
    I've created a Cloud Service containing one simple worker role. I've configured an endpoint in the WorkerRole configuration, which allows input connections via TCP on port 10100.
    Here is the content of the ServiceDefinition.csdef file:
    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="EmacCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-01.2.3">
      <WorkerRole name="TcpListenerWorkerRole" vmsize="Small">
        <Imports>
          <Import moduleName="Diagnostics" />
          <Import moduleName="RemoteAccess" />
          <Import moduleName="RemoteForwarder" />
        </Imports>
        <Endpoints>
          <InputEndpoint name="Endpoint1" protocol="tcp" port="10100" />
        </Endpoints>
      </WorkerRole>
    </ServiceDefinition>
    This worker role just creates a TcpListener listening on the configured port (using the RoleEnvironment instance) and waits for an incoming connection. It receives a message and returns a hardcoded message (see the code snippet below).
    namespace TcpListenerWorkerRole
    {
        using System;
        using System.Net;
        using Microsoft.WindowsAzure.ServiceRuntime;
        using System.Net.Sockets;
        using System.Text;
        using Roche.Emac.Infrastructure;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.Diagnostics;
        using System.Linq;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // This is a sample worker implementation. Replace with your logic.
                LoggingProvider.Logger.Info("TcpListenerWorkerRole entry point called");
                TcpListener listener = null;
                try
                {
                    listener = new TcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint);
                    listener.ExclusiveAddressUse = false;
                    listener.Start();
                    LoggingProvider.Logger.Info(string.Format("TcpListener started at '{0}:{1}'", RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint.Address, RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint.Port));
                }
                catch (SocketException ex)
                {
                    LoggingProvider.Logger.Exception("Unexpected exception while creating the TcpListener", ex);
                    return;
                }
                while (true)
                {
                    // Accept and serve one client per task; Wait() makes this sequential.
                    Task.Run(async () =>
                    {
                        TcpClient client = await listener.AcceptTcpClientAsync();
                        LoggingProvider.Logger.Info(string.Format("Client connected. Address='{0}'", client.Client.RemoteEndPoint.ToString()));
                        NetworkStream networkStream = client.GetStream();
                        StreamReader reader = new StreamReader(networkStream);
                        StreamWriter writer = new StreamWriter(networkStream);
                        writer.AutoFlush = true;
                        string input = string.Empty;
                        while (true)
                        {
                            try
                            {
                                char[] receivedChars = new char[client.ReceiveBufferSize];
                                LoggingProvider.Logger.Info("Buffer size: " + client.ReceiveBufferSize);
                                int readedChars = reader.Read(receivedChars, 0, client.ReceiveBufferSize);
                                char[] validChars = new char[readedChars];
                                Array.ConstrainedCopy(receivedChars, 0, validChars, 0, readedChars);
                                input = new string(validChars);
                                LoggingProvider.Logger.Info("This is what the host sent to you: " + input + ". Readed chars=" + readedChars);
                                try
                                {
                                    string orderResultFormat = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes("\xB")) + @"MSH|^~\&|Instrument|Laboratory|LIS|LIS Facility|20120427123212+0100||ORL^O34^ORL_O34| 11|P|2.5.1||||||UNICODE UTF-8|||LAB-28^IHE" + Environment.NewLine + "MSA|AA|10" + Environment.NewLine + @"PID|||patientId||""""||19700101|M" + Environment.NewLine + "SPM|1|sampleId&ROCHE||ORH^^HL70487|||||||P^^HL70369" + Environment.NewLine + "SAC|||sampleId" + Environment.NewLine + "ORC|OK|orderId|||SC||||20120427123212" + Encoding.ASCII.GetString(Encoding.ASCII.GetBytes("\x1c\x0d"));
                                    writer.Write(orderResultFormat);
                                }
                                catch (Exception e)
                                {
                                    LoggingProvider.Logger.Exception("Unexpected exception while writting the response", e);
                                    client.Close();
                                    break;
                                }
                            }
                            catch (Exception ex)
                            {
                                LoggingProvider.Logger.Exception("Unexpected exception while Reading the request", ex);
                                client.Close();
                                break;
                            }
                        }
                    }).Wait();
                }
            }

            public override bool OnStart()
            {
                // Set the maximum number of concurrent connections
                ServicePointManager.DefaultConnectionLimit = 12;
                DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
                RoleEnvironment.Changing += RoleEnvironment_Changing;
                return base.OnStart();
            }

            private void RoleEnvironment_Changing(object sender, RoleEnvironmentChangingEventArgs e)
            {
                // If a configuration setting is changing
                LoggingProvider.Logger.Info("RoleEnvironment is changing....");
                if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
                {
                    // Set e.Cancel to true to restart this role instance
                    e.Cancel = true;
                }
            }
        }
    }
    As you can see, nothing special is being done. I've used the RoleEnvironment.CurrentRoleInstance.InstanceEndpoints to retrieve the current IPEndpoint.
    Running the Cloud Service in the Windows Azure Compute Emulator, everything works fine, but when I deploy it to Azure, I get the following exception:
    2014-08-06 14:55:23,816 [Role Start Thread] INFO EMAC Log - TcpListenerWorkerRole entry point called
    2014-08-06 14:55:24,145 [Role Start Thread] INFO EMAC Log - TcpListener started at '100.74.10.55:10100'
    2014-08-06 15:06:19,375 [9] INFO EMAC Log - Client connected. Address='196.3.50.254:51934'
    2014-08-06 15:06:19,375 [9] INFO EMAC Log - Buffer size: 65536
    2014-08-06 15:06:45,491 [9] FATAL EMAC Log - Unexpected exception while Reading the request
    System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
    at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
    --- End of inner exception stack trace ---
    at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
    at System.IO.StreamReader.ReadBuffer(Char[] userBuffer, Int32 userOffset, Int32 desiredChars, Boolean& readToUserBuffer)
    at System.IO.StreamReader.Read(Char[] buffer, Int32 index, Int32 count)
    at TcpListenerWorkerRole.WorkerRole.<>c__DisplayClass0.<<Run>b__2>d__0.MoveNext() in C:\Work\Own projects\EMAC\AzureCloudEmac\TcpListenerWorkerRole\WorkerRole.cs:line 60
    I've already tried to configure an internal port in the ServiceDefinition.csdef file, but I get the same exception there.
    As you can see, the client can connect to the service (the log shows the 'Client connected' message with the address), but when it tries to read the bytes from the stream, the exception is thrown.
    To me it seems like Azure is preventing the retrieval of the message. I've tried disabling the firewall in the Azure VM, and the same thing keeps happening.
    I'm using Windows Azure SDK 2.3.
    Any help will be very very welcome!
    Thanks in advance!
    Javier
    If the answer helps you, please mark it as valid.
    Many thanks and good luck!
    Javier Jiménez Roda
    Blog: http://jimenezroda.wordpress.com

    Hi Javier,
    I changed your code like this:
    private AutoResetEvent connectionWaitHandle = new AutoResetEvent(false);

    public override void Run()
    {
        TcpListener listener = null;
        try
        {
            listener = new TcpListener(
                RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint"].IPEndpoint);
            listener.ExclusiveAddressUse = false;
            listener.Start();
        }
        catch (SocketException se)
        {
            return;
        }
        while (true)
        {
            // Accept connections one at a time; the handler signals the
            // wait handle once it has taken over the accepted client.
            IAsyncResult result = listener.BeginAcceptTcpClient(HandleAsyncConnection, listener);
            connectionWaitHandle.WaitOne();
        }
    }
    The HandleAsyncConnection method is your "while (true)" code:
    private void HandleAsyncConnection(IAsyncResult result)
    {
        TcpListener listener = (TcpListener)result.AsyncState;
        TcpClient client = listener.EndAcceptTcpClient(result);
        connectionWaitHandle.Set();
        NetworkStream netStream = client.GetStream();
        StreamReader reader = new StreamReader(netStream);
        StreamWriter writer = new StreamWriter(netStream);
        writer.AutoFlush = true;
        string input = string.Empty;
        try
        {
            char[] receivedChars = new char[client.ReceiveBufferSize];
            // LoggingProvider.Logger.Info("Buffer size: " + client.ReceiveBufferSize);
            int readedChars = reader.Read(receivedChars, 0, client.ReceiveBufferSize);
            char[] validChars = new char[readedChars];
            Array.ConstrainedCopy(receivedChars, 0, validChars, 0, readedChars);
            input = new string(validChars);
            // LoggingProvider.Logger.Info("This is what the host sent to you: " + input + ". Readed chars=" + readedChars);
            try
            {
                string orderResultFormat = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes("\xB")) + @"MSH|^~\&|Instrument|Laboratory|LIS|LIS Facility|20120427123212+0100||ORL^O34^ORL_O34| 11|P|2.5.1||||||UNICODE UTF-8|||LAB-28^IHE" + Environment.NewLine + "MSA|AA|10" + Environment.NewLine + @"PID|||patientId||""""||19700101|M" + Environment.NewLine + "SPM|1|sampleId&ROCHE||ORH^^HL70487|||||||P^^HL70369" + Environment.NewLine + "SAC|||sampleId" + Environment.NewLine + "ORC|OK|orderId|||SC||||20120427123212" + Encoding.ASCII.GetString(Encoding.ASCII.GetBytes("\x1c\x0d"));
                writer.Write(orderResultFormat);
            }
            catch (Exception e)
            {
                // LoggingProvider.Logger.Exception("Unexpected exception while writting the response", e);
                client.Close();
            }
        }
        catch (Exception ex)
        {
            // LoggingProvider.Logger.Exception("Unexpected exception while Reading the request", ex);
            client.Close();
        }
    }
    Please try it. For this error message, I suggest you refer to this thread (http://stackoverflow.com/questions/6173763/using-windows-azure-to-use-as-a-tcp-server) and this post (http://stackoverflow.com/a/5420788).
    Regards,
    Will
