Issue with Datasource

Hello BI Gurus,
I have a standard 3.x DataSource which is currently being used in the production system.
As per a new requirement I have enhanced the DataSource with new fields. The existing flows use most of the fields, but for my requirement I need at least 30% of the fields from the DataSource.
If I hide the rest of the fields, the cubes and DSOs in the current landscape will not get loaded with data, so there would be an impact on those cubes and DSOs.
How should I go about this?
My plan is to create a new InfoSource as per my requirement and assign it to the existing DataSource, where in the transfer rules I map only the fields I need from the DataSource to the InfoSource.
My question is: if I create a new InfoSource and assign it to the existing DataSource, there won't be any issue, right?
Can someone please help me out with this issue?

Hey Aryan,
If you are working in BI 7.0 (not emulated), you can map one DataSource to multiple InfoProviders; in this case there is no InfoSource in the picture. If you are working in an emulated scenario, the restrictions of 3.5 still hold good, so you cannot map the same DataSource to multiple InfoSources.
Let me ask you one question. Say there are 300 InfoObjects in the InfoSource. For your new development, do you need to make changes to any of the existing InfoObjects? If not, you are safe. Let us say you have 300 InfoObjects and you add 10 more. Out of the existing 300 you need only 80 InfoObjects from the existing InfoSource, plus the 10 newly added ones. You create a new cube/DSO with 90 fields, and map the 80 from the old set of 300 and the 10 newly enhanced InfoObjects. I think that should work without disturbing the existing setup. Think about it.
Thanks and Regards
Subray Hegde

Similar Messages

  • Issue with Datasource in BPEL 11g

    Hi,
    I am getting the following error intermittently from the datasource when it is used in the BPEL process. I have already had a look at the forum post "A stale Connection Factory or Connection Handle may be used in SOA 11g".
    The settings suggested in that post have already been applied and the issue still persists. Is there any possibility that the database is refusing this many connections on its end? We have set the maximum connection pool size to 1000 and have also tried increasing it to 2000.
    Any pointers for debugging this issue would be appreciated.
    Exception occured when binding was invoked.
    Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'FetchCustomFenceRingsSelect' failed due to: JCA Binding Component connection issue.
    JCA Binding Component is unable to create an outbound JCA (CCI) connection.
    EnrichSubscriptionRequestComposite:FetchCustomFenceRings [ FetchCustomFenceRings_ptt::FetchCustomFenceRingsSelect(FetchCustomFenceRingsSelect_inputParameters,FugroringsCollection) ] : The JCA Binding Component was unable to establish an outbound JCA CCI connection due to the following issue: javax.resource.spi.IllegalStateException: [Connector:199176]Unable to execute allocateConnection(...) on ConnectionManager. A stale Connection Factory or Connection Handle may be used. The connection pool associated with it has already been destroyed. Try to re-lookup Connection Factory eis/DB/test3 from JNDI and get a new Connection Handle.
    Please make sure that the JCA connection factory and any dependent connection factories have been configured with a sufficient limit for max connections. Please also make sure that the physical connection to the backend EIS is available and the backend itself is accepting connections.
    ".
    The invoked JCA adapter raised a resource exception.
    Please examine the above error message carefully to determine a resolution.
    Thanks!!
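    (For reference, the re-lookup that the error text recommends corresponds to something like the following in plain JCA/JNDI terms. This is only a sketch: in a BPEL process the DbAdapter performs this lookup itself, the class and method names here are hypothetical, and eis/DB/test3 is the JNDI name taken from the log above.)
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.resource.ResourceException;
    import javax.resource.cci.Connection;
    import javax.resource.cci.ConnectionFactory;

    public class FreshJcaConnection {
        // Re-look up the connection factory on each call instead of caching a handle
        // whose underlying pool may already have been destroyed (redeploy, datasource restart).
        public static Connection open(String jndiName) throws NamingException, ResourceException {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup(jndiName);
            return cf.getConnection();
        }
    }

    // Usage: Connection c = FreshJcaConnection.open("eis/DB/test3");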

    Hi,
    Do you think it is an issue with the availability of connections in the connection pool? The connection pool maximum size is now 2000, which I think is a very large number.
    In our BPEL component we have 8 to 9 DB adapter calls and the number of concurrent requests is very low, so I don't see how the BPEL component could consume all of the available connections.
    Thanks.

  • Create datasource issue with Planning 9.3.1

    The plan is to upgrade to 9.3.3, so we need 9.3.1 first.
    I have installed Planning 9.3.1 on a Windows 2003 server and finished the configuration steps as well.
    All my Hyperion Planning databases are on an Oracle 11g RAC system installed on Linux.
    The individual SIDs are hyp1 and hyp2, and we use HYP to connect.
    I used HYP as the SID for all other configuration steps and was successful, but when I try to use HYP as the SID in the create-datasource process I get the error "Relational Database Connection Failed:= :[Hyperion][Oracle JDBC Driver][Oracle]ORA-12505 Connection refused, the specified SID (HYP) was not recognized by the Oracle server."
    If I use either HYP1 or HYP2 it works fine, but it does not work with HYP.
    Is this an issue with 9.3.1, or will this continue to be an issue with 9.3.3 as well? Any suggestions on how to use HYP as my SID instead of HYP1 or HYP2?
    Thank you.
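    (Side note: ORA-12505 means the listener tried to resolve the connect string as an instance SID, and HYP is a RAC service name rather than an instance SID. With a plain Oracle thin JDBC URL the difference looks like this; a sketch only, with placeholder host, port and credentials, and the Planning configurator itself uses the bundled DataDirect driver as described in the reply below.)
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class RacUrlDemo {
        public static void main(String[] args) throws SQLException {
            // SID syntax (shown for comparison): works for the instances HYP1/HYP2,
            // but fails with ORA-12505 if you put the service name HYP here.
            String bySid = "jdbc:oracle:thin:@dbhost:1521:HYP1";
            // Service-name syntax: HYP can be the RAC service fronting HYP1/HYP2.
            String byService = "jdbc:oracle:thin:@//dbhost:1521/HYP";
            try (Connection c = DriverManager.getConnection(byService, "planning_user", "password")) {
                System.out.println("Connected via service name, closed=" + c.isClosed());
            }
        }
    }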

    That article didn't mention Planning, but I wonder if it is possible to create a datasource using the standard method, pointing to one DB instance.
    Then go into HSPSYS_DATASOURCE and update rdb_server_url to something like the following (it will need updating to the correct path):
    jdbc:hyperion:oracle:TNSNamesFile=D:\oracle\\product\\10.2.0\\db_1\\NETWORK\\ADMIN\\tnsnames.ora;TNSServerName=HYP
    or it may be worth having a look at http://www.datadirect.com/resources/jdbc/oracle-rac/connecting.html
    as 9.3.1 uses DataDirect JDBC drivers.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Issue with Analysis Office Add in

    Hi,
    Users are having an issue with Analysis Office and are getting the error below.
    From Analysis Office -> Open workbook -> after logging in to AO using a BW connection.
    After opening the report, when they refresh it, they get the error:
    "An Exception occurred in one of the data sources. SAP BI Add in has disconnected
    Nested Exception. See inner exception below for more details
    Initial RANGE-LOW for customer exit variable ****_EXIT_001 corrected ..
    Under details
    An exception has occurred in one of the data sources.
    SAP BI Add-in has disconnected all data sources.(ID-111007)
    We are using BO Analysis for MS Office Add-in 1.4 SP3 on the BO server.
    Please let me know the reason for this error and how to fix this.
    Thanks in advance.
    Jayakrishna

  • Issues with JDBC Connection Pooling

    Hi all,
    I'm experiencing some unexpected behaviour when trying to use JDBC Connection Pooling with my BC4J applications.
    The configuration is:
    Web application using BC4J in local mode
    Using the default connection strategy
    Stateless Release Mode
    Retrieving Application Modules using Configuration.createRootApplicationModule( am , cf );
    Returning Application Modules using Configuration.releaseRootApplicationModule( am, false );
    Three application modules
    AppModuleA - connects to DatabaseConnection1
    AppModuleB - connects to DatabaseConnection2
    AppModuleC - connects to DatabaseConnection2
    My requirements are:
    Use application module pooling, with an individual pool for each application module
    Use JDBC pooling, with an individual pool for each database connection
    Note: all configuration was done in design mode (i.e. right-clicking AppModule -> Configurations...).
    1. Initial approach:
    In the configuration for each application module I specified the connection type as 'JDBC Datasource' and pointed it at the appropriate datasource.
    I tried setting doConnectionPooling to 'true' as well as 'false'.
    In the data-sources.xml I specified all the appropriate info including min-connections and max-connections.
    I would expect, with the above config that BC4J would use OC4J's built in JDBC connection pooling.
    2. Second approach:
    In the configuration for each application module I specified the connection type as JDBC URL.
    In the configuration I specified doConnectionPooling = 'true' as well as the max connections, max available and min available.
    What I experienced in both cases was that the max connections setting seemed to be ignored, as the number of connections reported by the database (v$session) exceeded it by more than 10.
    In addition, once the load was removed the number of JDBC connections did not drop (I would have expected it to drop to the max available connections).
    My questions are:
    1. When specifying a 'JDBC Datasource' style of connection, is it in fact OC4J that is then responsible for pooling JDBC connections? And in this case, should BC4J's doConnectionPooling parameter be set to true or false?
    2. Are there any known issues with the use of the JDBC connection pool in either of the above two approaches?
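    (For context, the acquire/release pattern referred to above looks roughly like the sketch below; the definition name model.AppModuleA and the configuration name AppModuleALocal are placeholders, not names from this project.)
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;

    public class AmCheckout {
        public static void main(String[] args) {
            // Placeholder names: substitute your own AM definition and configuration.
            String amDef = "model.AppModuleA";
            String config = "AppModuleALocal";
            ApplicationModule am = Configuration.createRootApplicationModule(amDef, config);
            try {
                // ... execute view object queries here ...
            } finally {
                // false = return the instance to the AM pool rather than removing it
                Configuration.releaseRootApplicationModule(am, false);
            }
        }
    }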

    Thanks for the additional info. Please see my comments below.
    You wrote: "Sorry, should have been more specific -"
    1. Is each application pool using a different JDBC user? You mentioned DatabaseConnection1 and DatabaseConnection2 above; are these connections to different schemas/users? If so, BC4J will create a separate connection pool for each JDBC user. Each connection pool will have its own maximum pool size.
    Your answer: "Each 'DatabaseConnection' refers to a different database, actually hosted on a separate physical server, with a different schema and a different user."
    My comment: BC4J will maintain a separate connection pool for each permutation of JDBC URL / schema. If each user is connecting to a different DB instance then I would expect no greater than 10 DB sessions. However, if a DB instance is hosting more than one user then I would expect greater than 10 DB sessions (though still no more than 10 DB sessions per user).
    2. Are all the v$session sessions related to the JDBC clients? There should be at least one additional database session, which will be related to the session that is querying v$session.
    Your answer: "When querying the v$session table I specifically look for connections from the user in question and from the machine name in question, and in doing so eliminate the database system's connections as well as the query tool's connection. One area I'm not sure about is the connection BC4J uses to write to its temporary tables. I am using Stateless release mode and have not explicitly stated to save to the database, but I'm wondering if it still does, and if so, how it comes into the equation with max connections."
    My comment: BC4J's internal connections are also pooled and the limits apply as mentioned above. So, if you have specified internal connection info for a schema which is different from the users above, I would expect the additional connections.
    One helpful diagnostic tool, albeit programmatic, might be to print the information about the connection pools after
    your test client(s) have finished. This may be accomplished as follows:
    // get a reference to the BC4J connection pool manager
    import oracle.jbo.server.ConnectionPoolManagerFactory;
    import oracle.jbo.server.ConnectionPoolManagerImpl;
    import oracle.jbo.pool.ResourcePool;
    import java.io.PrintWriter;
    import java.util.Enumeration;
    // get the ConnectionPoolManager; assume that it is an instance of the supplied manager
    ConnectionPoolManagerImpl mgr = (ConnectionPoolManagerImpl)ConnectionPoolManagerFactory.getConnectionPoolManager();
    Enumeration keys = mgr.getResourcePoolKeys();
    PrintWriter pw = new PrintWriter(System.out, true);
    while (keys.hasMoreElements())
    {
        // dump the statistics of each registered resource (connection) pool
        Object key = keys.nextElement();
        ResourcePool pool = (ResourcePool)mgr.getResourcePool(key);
        System.out.println("Dumping pool statistics for pool: " + key);
        pool.dumpPoolStatistics(pw);
    }
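    (And for the v$session check you describe, a small standalone JDBC count per user/machine could look like this; a sketch only, with placeholder connection details, and it needs a user with SELECT access to v$session.)
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class SessionCount {
        public static void main(String[] args) throws Exception {
            // Placeholders: point this at the instance whose sessions you are counting.
            try (Connection c = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "monitor_user", "password");
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT username, machine, COUNT(*) FROM v$session "
                     + "WHERE username = ? GROUP BY username, machine")) {
                ps.setString(1, "APP_USER"); // the JDBC user under test
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " @ " + rs.getString(2)
                            + " : " + rs.getInt(3) + " sessions");
                    }
                }
            }
        }
    }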

  • Performance issues with 0CO_OM_WBS_1

    We use BW 3.5 & R/3 4.7 and encounter huge performance issues with 0CO_OM_WBS_1. We always have to do a full load involving approx. 15M records even though there are on average only 100k new records since the previous load. This takes a long time.
    Is there a way to delta-enable this datasource?

    Hi,
    This DataSource is not delta-enabled, so you can only do a full load. For a delta-enabled one, you need to use 0CO_OM_WBS_6. It works like the other Financials extractors, as it has a safety delta (configurable, default 2 hours, in table BWOM_SETTINGS).
    What you could do is use WBS_6 as a delta and extract full loads for WBS_1 only for shorter durations.
    As you must have an ODS for WBS_1 at the first stage, I would suggest doing a full load only for posting periods that are open. This will reduce the data volume.
    You may also look at creating your own generic DataSource with delta, if you are clear on the tables and logic used.
    cheers...
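    (To make the safety-delta idea concrete: a generic timestamp delta leaves a safety gap below the current time so that late postings are not missed. The sketch below is an illustration only, not the actual extractor logic; the table, column and connection details are hypothetical.)
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Timestamp;
    import java.time.Duration;
    import java.time.Instant;

    public class SafetyDeltaSketch {
        public static void main(String[] args) throws Exception {
            Duration safety = Duration.ofHours(2);                      // like the default 2-hour safety delta
            Instant lastLoad = Instant.parse("2010-01-01T00:00:00Z");   // delta pointer stored by the previous run
            Instant upperBound = Instant.now().minus(safety);           // leave a safety gap for late postings

            // SRC_TABLE / CHANGED_AT are hypothetical names for the source table and its change timestamp.
            String sql = "SELECT doc_id, amount FROM src_table WHERE changed_at > ? AND changed_at <= ?";
            try (Connection c = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pw");
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setTimestamp(1, Timestamp.from(lastLoad));
                ps.setTimestamp(2, Timestamp.from(upperBound));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // ... hand the changed record to the load ...
                    }
                }
                // After a successful load, persist upperBound as the new delta pointer.
            }
        }
    }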

  • Performance issues with SAP BPC 7.0/7.5 (SP06, 07, 08) NW

    Hi Experts
    There are some performance issues with SAP BPC 7.5/7.0 NW. Users are saying they are not getting data, or that there are issues when getting data from the R/3 system or ECC 6.0. What do I need to check, for example which DataSources or cubes, and how do I solve this issue?
    What do I need to consider for an SAP NW BI 7.0 to SAP BPC 7.5 NW (SP06, 07, 08) implementation?
    Your help is greatly appreciated.
    Regards,
    Qadeer

    Hi,
    A new SP was released in February, and by now most of the new bugs should have been caught. Each SP has a Central Note; for SP06 it is Note 1527325 - Planning and Consolidation 7.5 SP06 NetWeaver Central Note. Most of the improvements in SP06 were related to performance, especially when logging on from the BPC clients. There you should be able to find a big list of fixes/improvements and the Notes that describe them. Some of the Notes even have test descriptions of how to reproduce the issue in the old version.
    Hope this helps.
    Regards
    Rv

  • Crystal Reports XI R2 SP 6 - Issue with setting Data Source Location

    Hello,
    After some initial difficulty installing CR XI R2 with SP6 on a Windows XP machine (see thread "Error on installation of CR XI R2 SP6"), the Crystal Reports environment seemed fine. However, when I try resetting the datasource location of an ODBC datasource between a development and a production server, I get the message 'Some tables could not be replaced as no match was found....' The tables in the two databases are identical, so that isn't the issue.
    Here is some additional information:
    The issue seems related only to DSNs that point to a Progress database. I am able to reset datasource locations for DSNs that use the SQL Server driver and also for those that use the CR ODBC XML Driver 5.0. I am not able to reset the datasource locations on the DSNs that use a Progress OpenEdge 10.1 DB driver.
    I can create a new report using the DSN for the Progress driver and add tables, but the table names come up as aliases - i.e. if I add a table called PM_Plant, the table added to the report is PM_Plant1.
    I also found I can go into existing reports, rename the tables in the Database Expert to an alias (appending 1 to the end of the table name), and then I am able to repoint them using the datasource location screen.
    So it looks like there is a potential workaround, but I didn't run across any information saying we should need to do that.
    Any recommendations on how to fix the issue?
    Thanks,

    Hi Don,
    The reports were created with CR XI R1 on my PC initially, and the Progress drivers have not changed since. The reports were deployed to a server, and I pulled them back to my PC to test any changes after the CR XI R2 SP6 upgrade (so there is really only one machine involved, the one that had the upgrade).
    I did look at the settings for verifying and tried playing around with those, and also with verifying the database, but that didn't make any difference.
    I wasn't quite sure which registry keys to look at or what the values should be, so I wasn't able to pursue that option.
    All the tables in the Progress database use underscores as part of the table name (e.g. PM_Plant, PM_Company). Do you think the upgrade to SP6 means that the underscore is now a reserved character and that is why the tables are getting aliased? If so, do you know how to change the alias settings or the list of reserved characters?
    Just an FYI, I had to downgrade back to CR XI R1 at this point to get work done. If time allows, I'll retry the upgrade in about 5-7 weeks. I discussed the issue with a system administrator and we will try removing the Progress drivers and DSNs prior to trying the upgrade again to see if that makes a difference. I'll also make sure to keep track of all Report Options and Options settings and also the registry keys to see what changes.
    Thanks

  • Issue with 0hrposition master data

    We are extracting data from SAP using the 0HRPOSITION_ATTR DataSource. I noticed that the data is not maintained correctly in the master data tables in BW, which gives us incorrect results in reporting. Consider the scenario below:
    Position A is created as vacant on 04/01/2006 with start date (BEGDA / Valid from) 04/01/2006 and end date (ENDDA / Valid to) 12/31/9999. The following entries are shown under maintain master data for 0HRPOSITION in BW:
    Position  Valid To    Valid From  Position Vacant
    A         03/31/2006  01/01/1000
    A         12/31/9999  04/01/2006        X
    Position A is now delimited on 09/15/2006 as it is no longer required. In SAP, the position has a record only from 04/01/2006 to 09/15/2006 as vacant. When the record is extracted into BW, it creates the following entries in the master data table:
    Position  Valid To    Valid From  Position Vacant
    A         03/31/2006  01/01/1000
    A         09/15/2006  04/01/2006        X
    A         12/31/9999  09/16/2006        X   <-- incorrect
    The entry 09/16/2006 - 12/31/9999 is incorrect, as the position does not exist for this duration. If we report on 0HRPOSITION with key date 09/30/2006, it shows position A as vacant even though the position no longer exists.
    Has anyone come across this situation? Any help is greatly appreciated.
    Kamal
    P.S.: Milind Rane, I was searching through the forums and came across your post. I would appreciate it if you could let me know how you solved this issue.

    Hi KK,
    I have a similar issue. Can you please let me know how this issue with the 0HRPOSITION_ATTR extractor was resolved? In my case I have incorrect data in BW when reporting is done. Through the extractor up to the PSA the correct records are coming in, but in the master data of 0HRPOSITION I have incorrect records.
    Please help.
    Thanks
    Hari

  • Issue with 0HRPOSITION_ATTR extractor

    We are live with the BW HR application and are having an issue with 0HRPOSITION master data. We are extracting data from SAP using the 0HRPOSITION_ATTR DataSource. I noticed that the data is not maintained correctly in the master data tables in BW, which gives us incorrect results in reporting. Consider the scenario below:
    Position A is created as vacant on 04/01/2006 with start date (BEGDA / Valid from) 04/01/2006 and end date (ENDDA / Valid to) 12/31/9999. The following entries are shown under maintain master data for 0HRPOSITION in BW:
    Position Valid To   Valid From  Position Vacant
    A        03/31/2006 01/01/1000
    A        12/31/9999 04/01/2006        X
    Position A is now delimited on 09/15/2006 as it is no longer required. In SAP, the position has a record only from 04/01/2006 to 09/15/2006 as vacant. When the record is extracted into BW, it creates the following entries in the master data table:
    Position  Valid To    Valid From  Position Vacant
    A         03/31/2006  01/01/1000
    A         09/15/2006  04/01/2006        X
    A         12/31/9999  09/16/2006        X
    The entry 09/16/2006 - 12/31/9999 is incorrect, as the position does not exist for this duration. If we report on 0HRPOSITION with key date 09/30/2006, it shows position A as vacant even though the position no longer exists.
    Has anyone come across this situation? Any help is greatly appreciated.
    Thanks,
    Milind
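    (To make the expected behaviour concrete, the sketch below shows how delimiting an open-ended validity interval should split the timeline so that the trailing slice up to 12/31/9999 no longer carries the vacancy flag. This is a generic illustration of the interval logic only, not the actual extractor or BW master-data code.)
    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    public class ValidityIntervalSketch {
        record Interval(LocalDate from, LocalDate to, boolean vacant) { }

        // Delimit the position on endDate: the open-ended slice is cut at endDate, and the
        // trailing slice up to 12/31/9999 must not keep the old vacancy attribute.
        static List<Interval> delimit(List<Interval> timeline, LocalDate endDate) {
            List<Interval> result = new ArrayList<>();
            for (Interval iv : timeline) {
                if (iv.to().isAfter(endDate) && !iv.from().isAfter(endDate)) {
                    result.add(new Interval(iv.from(), endDate, iv.vacant()));
                    result.add(new Interval(endDate.plusDays(1), LocalDate.of(9999, 12, 31), false));
                } else {
                    result.add(iv);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            List<Interval> timeline = List.of(
                new Interval(LocalDate.of(1000, 1, 1), LocalDate.of(2006, 3, 31), false),
                new Interval(LocalDate.of(2006, 4, 1), LocalDate.of(9999, 12, 31), true));
            delimit(timeline, LocalDate.of(2006, 9, 15)).forEach(System.out::println);
        }
    }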

    Hi
    I have not worked with this DataSource.
    1) Is your DataSource delta-enabled?
    2) Try running RSA3 and check the record that is not coming through correctly. If the record is fine in RSA3, then check the calculations (if any) in the update/transfer rules.
    Let me know if it helps.
    Sorabh
    Assign points if it helps.

  • Issue with Bex report - key Figures not populating correctly.

    Hi Experts,
    I am facing an issue with a BEx report. There are three key figures of type DATE with the "DEC - Counter or amount field with comma and sign" data type. After executing the query, in the report the key figure fields come out as 'X' for some sales documents, while the rest come out correctly in the date format mm/dd/yyyy.
    When I check in the cube, these key figures show values in decimal format and not in date format. The conversion happens during query execution.
    Please shed some light on how to identify the cause of getting 'X' for some sales documents in the report even though others come out correctly.
    Thanks,
    Anamika
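    (One hedged way to narrow this down, assuming the cube stores the date key figure as an eight-digit YYYYMMDD decimal, would be to check which stored values cannot be resolved to a real calendar date; those are the likely candidates for the 'X' cells. The sketch below illustrates only that check, with made-up sample values.)
    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.time.format.ResolverStyle;

    public class DateKeyFigureCheck {
        // Strict pattern: 'uuuu' + STRICT rejects impossible dates such as 20101332.
        private static final DateTimeFormatter YYYYMMDD =
            DateTimeFormatter.ofPattern("uuuuMMdd").withResolverStyle(ResolverStyle.STRICT);

        public static boolean isValid(long decimalValue) {
            try {
                LocalDate.parse(String.format("%08d", decimalValue), YYYYMMDD);
                return true;
            } catch (Exception e) {   // zero, negative or impossible date values end up here
                return false;
            }
        }

        public static void main(String[] args) {
            long[] samples = {20100312L, 0L, 20101332L};   // hypothetical cube values
            for (long v : samples) {
                System.out.println(v + " -> " + (isValid(v) ? "converts to a date" : "cannot be converted"));
            }
        }
    }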

    Hi,
    The InfoObject has been defined with data type "DATE" only. It has also been mapped correctly from the DataSource to the InfoCube, as these key figures populate correctly for some of the sales documents in the report. It is not the case that the key figures fail to convert for all sales documents.
    For some sales documents the values do not convert into dates, while for others the conversion happens properly in the BEx report. This is the issue.
    Please guide accordingly.
    Many thanks,
    Anamika

  • Few issues with BI 2004s

    Hello there,
    I am having a few issues with a BI 2004s system. It's a new implementation.
    - When you create a DataSource using DB Connect in the new system, it is created as the old version; I can see the small square box next to the DataSource. I would expect the system to generate a new 2004s DataSource.
    - Obviously the generated DataSource won't allow us to create new transformations; we have to create old-style transfer rules/update rules. I do see the option to migrate the DataSource when I right-click on it, but I get a short dump when I try that, and then I have to discard the DataSource and create a new one.
    - Also, it won't let you delete the DB Connect DataSource, although the option to do so is there.
    Has anyone faced these issues?
    Performance issue with reporting:
    I understand that you have to have the portal in order to execute reports, which we don't have at this client; they don't have the time or money to invest right now. So we have created reports using the BW 3.5 BEx Query Designer/WAD to run the reports on the web. The report takes a long time when you try to drill down on Material/Customer, with only 150,000 records in the cube. The cube has just three dimensions (two characteristics each) and 4 key figures. Any suggestions to improve this?
    Thanks
    Sudhakar

    Yes, I noticed it too and was thinking the same thing, blueaura.
    I'm sure they want to ensure the scan is reliable. I keep my screen at a lower brightness to save power, and sometimes Starbucks scanners take a bit longer for it to register.
    2) is already fixed.
    3) is a feature, not a bug. If you are updating an app you already have installed, it doesn't bother to ask you for the password. Nice!
    For Google maps, just use maps.google.com in the browser.

  • Issue with Sales office values in BI

    Hi Team,
    We have an issue with the sales office values in a BI report.
    The report displays 7 sales office values for division 01, whereas in ECC only 6 sales office values exist for the same division 01.
    Sales office 7 was loaded in the April and May months.
    The master data has all the sales office values 01 - 10.
    Those sales office values come from the InfoCubes 'Billing Document Condition' & 'Open Orders'.
    I have checked the MultiProvider, the InfoCubes and the InfoObject, and the values exist.
    How do I proceed further and correct these values?
    Appreciate your expert guidance...
    Thanks
    Regards
    Santhosh Kumar N

    Hi Krishna,
    Thanks for the reply. The sales office field is directly mapped in the transformation and does not have any routine. It is a key field.
    The Billing Document Condition InfoCube is fed by the DSO '2LIS_13_VDKON - Billing Document Condn', and the DataSource is '2LIS_13_VDKON'.
    The Open Orders InfoCube is fed by the DSO Document Order Item / Delivery, below which we have another 3 DSOs:
    1st DSO has the Datasource '2LIS_13_VDITM'
    2nd DSO has the Datasource '2LIS_11_VAITM'
    3rd DSO has the Datasource '2LIS_11_V_SSL'
    Sales office 7 has transaction records for the months of April & May.
    The report is built on top of a MultiProvider, and for the months June and July the transaction records are fine, covering sales offices 01 - 06.
    Please help me if I am missing anything here, and help me understand this better.

  • Weblogic 10.3.0 issues with remote object calls.

    All:
    I was wondering if anyone has experienced any issues with Weblogic 10.3.0 dropping initial remote object calls over AMF Secure Channel. Here are the issues we are experiencing.
    1. Flex applications fail consistently on the first remote object call made across the AMF Secure Channel, resulting in the request not returning from the application server. This has had varying effects on the different applications, including missing data, application freezes and a general degradation of the user experience.
    2. Flex applications require a browser/application refresh once the application has been inactive for a certain period of time. In our experience the behavior occurs after 30 minutes of inactivity.
    I've deployed this same code to Weblogic 10.3.3 and the behaviors go away. Are there any patches to 10.3.0 that might take care of this issue that we are not aware of?
    Thanks for your help,
    Mike

    Hello,
    I found the problem, but I had to change the target of all my datasources before discovering that one of my datasources did not respond and no error was triggered.
    My server was waiting on this datasource and did not start.

  • Network issue with initial connections on Hyper-V

    This is the opposite of all the other networking issues I have seen with Hyper-V. Has anyone run into this?
    I am running Hyper-V on a Server Core installation, and have had the same issue when I used Server 2012 R2 as when I used Server 2008 R2 as the virtual host on the same server. The server is an HP ProLiant DL360 Gen8 with 2x 8-core processors and 32 GB RAM. The VM is sized and configured to stay within 1 NUMA node. It uses a single port of the 4-port HP/Broadcom 331 NIC, with all the TCP LSO settings disabled at the switch, host, and guest, and the same with the power management settings. We have ProxyARP on the network, so I have set the ArpRetryCount key to 0.
    I have seen it happening with and without SR-IOV enabled. The guest OS is an RDS server, so for the sake of our CALs it has to stay at Server 2008 R2. I have set up a HOSTS file to mitigate this problem, as it seems to be a DNS issue. Here's what happens:
    The first time I visit a resource, it times out. An example not in our HOSTS file does this consistently: if I go to https://mail.mydomain.com/OutlookWebAccess, it spins for a couple of minutes before timing out. I hit refresh and the page loads immediately. This server is colocated in-house, as is the mail server. I'd say it is a DNS issue at some level, but I'd like to know at what level: is it just timing out trying to cache the DNS request, or is there a performance issue with a DNS server on the network? (All of our DNS is AD-integrated, running on DCs.)
    Here is where I'm seeing this:
    Internet Explorer - described above.
    Salient Interactive Miner - at the login screen it searches for a database and times out. If I go to the setup and enter the IP address, it still times out. It will not find the database server until I quit and restart the program. If this is the same issue, it would obviously not be a DNS problem.
    I DON'T see it in GP9 connecting to a datasource
    Prior to setting up the HOSTS file, I saw this in Windows Explorer connecting to network shares on the same LAN segment.
    This appears to only happen to the Hyper-V guest OS. As a best practice, whenever Windows Update runs, I update the Integration Components.
    I have the bindings properly ordered on the guest VM, with the network adapter that goes to the physical NIC first, and IPv4 ordered before IPv6.
    All updates are installed, and the firmware is at its latest revision.
    Any ideas?
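    (One way to separate name-resolution time from connection time from inside the guest is sketched below; mail.mydomain.com and port 443 are placeholders based on the example above, and this is only a rough diagnostic, not a fix.)
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class DnsVsConnect {
        public static void main(String[] args) throws Exception {
            String host = "mail.mydomain.com";   // placeholder host from the post

            long t0 = System.nanoTime();
            InetAddress addr = InetAddress.getByName(host);   // pure name resolution
            long dnsMs = (System.nanoTime() - t0) / 1_000_000;

            long t1 = System.nanoTime();
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(addr, 443), 10_000);   // TCP connect only
            }
            long connMs = (System.nanoTime() - t1) / 1_000_000;

            System.out.println("DNS lookup: " + dnsMs + " ms, TCP connect: " + connMs + " ms");
        }
    }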

    Hi Daniel,
    " This appears to only happen to the Hyper-V guest OS. "
    Do you mean that all other hosts can access the network resources normally, except these VMs?
    If yes, my suggestion is to disable VMQ on the physical NIC used by the VMs, then test again.
    Best Regards
    Elton Ji
