Multiple Deployment Targets and Performance

Hi,
We have several apps. deployed to a managed server. Then I was asked to create a second manageed server instance on the same host. I deployed all apps. to this new managed server (as well as keeping it deployed on the original managed server).
In looking at the performance monitor, it seems that the original managed server is taking most of the hit. Do I need to shutdown and restart the servers to see a more even distribution?
Thanks,
Paul

Just try it, it's easy enough and quicker than waiting for a response to your question. :)
Tell us if it worked though!

Similar Messages

  • What is the significance of the Assembly Deployment Target and Feature Scope?

    Hello,
    We have created a project as Farm Solution (not sandbox).
    Under project properties we can see an option for Assembly Deployment Target : GAC or WebApplication.
    I know that GAC will deploy the dll to windows\assemblies and WebApplication will deploy to website/bin folder.
    Now, we add a feature to the project; in here we can see a dropdown for scope with options as: Farm; Web; Site; WebApplication. I know that depending upon what option is chosen; the feature can then be activated by going to the respective manage features
    option via SharePoint front end UI.
    Can you tell me what is the significance of the Assembly Deployment Target and Feature Scope?

    >>Can you tell me what is the significance of the Assembly Deployment Target and Feature Scope?
    Feature scope can be decided based on the type of artifacts you will be building
    Check the below link for what type of artifacts can be build at different levels of scopes
    http://msdn.microsoft.com/en-us/library/ms454835(v=office.14).aspx
     wrt to Assembly Deployment Target depends on what type of components you are deploying through your solution which will be specified in manifest.xml. There are advantages
    if you deploy the components to web application rather GAC like all Apppools will not get recycled, avoiding IIS reset etc. But if you have 3rd party dlls and feature receivers which need to deployed along with the solution then its advisable to deploy to
    GAC
    http://trentacular.com/2009/06/sharepoint-differences-between-global-and-web-application-targeted-solution-deployment/
    https://www.captechconsulting.com/blog/james-brocato/sharepoint-without-the-gac
    Hope this helps.
    My Blog- http://www.sharepoint-journey.com|
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful

  • I have multiple apple ids was backing up everything to iCloud from iPhone 5. I just bought iPhone 6 and perform a iCloud restore. I received a message that all files did not restore.   How can I restore apps, files manually?

    I have multiple apple ids was backing up everything to iCloud from iPhone 5. I just bought iPhone 6 and perform a iCloud restore. I received a message that all files did not restore.   How can I restore apps, files manually?

    Yes - I connected my phone to my computer / Itunes and went into the apps section, but from there I have no idea how to manage the push notifications.  I even tryied going into itunes that is installed on my phone.  I still cannot find anyplace to manage these popups.  I have also gone into settings - notifiations - and tried turning all notifications for these apps all off but that didnt work either.  Any guidance is MUCH appreciated - Im not sure where to go from here.

  • One data source and assign multiple data target in BI?

    Hi all,
    Is it  possible is to assign one data source to multiple data targets in BI? Not like  in BW 3.X ; one data source can assign only one Info source. I am bit confused about it, let me know about this ?
    Regards.
    hari

    Okay, I must have misunderstood your message, I was thinking BI 7 and data targets .. like cubes and DSO's.
    In 3.x, assign your datasource to a single infosource.  Then assign that infosource to multiple data targets by creating update rules and assigning your infosource to each/all of them.
    In this way, you shouldn't need multiple infosources per datasource unless you have a special situation that calls for it.
    Brian

  • WSUS Updates to Available and Required collections - multiple deployment packages?

    Hi, I'm trying to follow various documents but cannot find a single unified methodology for how WSUS updates are now supposed to work
    I am creating software update Groups and trying to keep them under 1000 updates each as they take 2 hours or so.
    So i have a Pre2103, All2013, 2014January to May, 2014 June to September and starting from this month i will do one monthly
    My Client base is servers or workstations. I have 2 collections - Available (DCs SQl etc that need manual intervention) and Required (everything else)
    So i now create a deployment package for each group - but now i have to do that twice? Once for available collection and once for required collection? Am i missing an easier/better way of doing this?
    Once these have all been set up do i then have to deploy them with an ADR? Inject them into the Gold image of my workstations?
    I would like the internal WSUS to work exactly like (or as near as can be) to the way microsoft's works. EG if i take a vanilla windows 7 or 8 build and connect it to my network, it gets the GPO, installs the client then goes to SCCM and gets alll the updates
    to bring itself up to date without me having to then go back into SCCM and create a specifc deployment of job or to manually send out anything.
    Or will this only work when the collection gets updated and is aware of the new computer?
    Thanks

    OK i'm doing this now. As in the initial message i posted, everything up til 14th October is
    done and dusted. I have 4x Software Update groups. Pre2013, All2013,2014JantoMay and
    2104JunetoSept
    For each of these groups, i created a Folder under the root All Software Updates", filtered
    my searches and moved each grou of updates into its own folder for neatness, as well as a
    "Superseded and Expired" folder
    To deploy each of these i HAD to create a deployment package for each. Only done "Required"
    so far so i have 4 Deployment packages.
    So, now i am going to step through what i think i need to do to set up updates released on or
    after 14/10/2014
    I have no software update group or deployment package for October's releases and no ADRs at
    all.
    Just a list of 232 updates in All Software Updates. I deleted expired and superseded etc.
    In this view, there are 64 Assets total. 7 unknown!!? What does this even mean? How do i find
    out which are unknown? In the all device list, they are either clients or not. Nothing says
    unknown!! Anyway thats for later...
    I have 2x collections. WSUS-Available (x23) and WSUS-Required (x41)
    None of the 232 updates are downloaded and theres a mixture of required not required and
    percent compliant. .
    I have a workstation on the domain and have run a windows update (updates managed by my
    system administrator) and it says Up to date - and it takes ages.
    If i change this to "Check fromo Microsoft" i get 42 important updates and 34 optional
    updates available. This is my target scenario to hit from SCCM WSUS
    OK here goes.
    I select all 232 updates and move them to a folder called 2014October
    i go to Automatic Deployment Rules and Create a new one called WSUS2014Oct-Required
    select Patch Tuesday Template
    Collection is WSUS-Required
    create a new software update group
    Auto Deploy
    Released last 2 weeks - every update classification
    Run on a schedule - every second tuesday
    Client local time
    1hour / as soon as possible
    default/default/default
    now i need to create a new deployment package. Call it WSUS2014Oct
    Point it at sources and a folder called 2104October
    Now i choose Download software updates from a location on my network and select the local
    drive "E:\WSUS\WSUSContent" folder
    So now i have a "Required" ADR and i click Run Now
    BUT now i need to do one for the Available group as well. I notice that during the ADR i
    cannot select Required or Available.
    So i guess i need to suppress reboots on Servers then?
    Also if i am using the same deployment package, why do i need to reselct the download
    location again? its the same updates
    Also, now that i have 2x ADRs the first one i created creates the deployment package and the
    second ADR now points to that deployment package too so i now dont have separate packages to
    monitor
    Also, after running them both i get errors and no software update groups are created. So im
    guessing thats the problem as the error is 0x80070002 the file you specificed could not be
    found
    also get error on the available group saying Auto Deployment Rule download failed
    This is ridiculous. Why does creating an ADR allow creation of a SUG if it doesnt work.
    Very frustrated

  • Is it possible to deploy SharePoint or its Service Applications on: multiple DB-Servers and multiple SQL Instances?

    Hello Forum,
    We have a SharePoint 2013 farm (Enterprise edition) that uses one single SQL Server 2012 (Standard edition). That statement means: All my SharePoint DBs e.g. (Config, Admin, Content, and Service Apps) DBs are hosted and running onto one single instance e.g.
    Server1\SQLInstance1.
    We have some new requirements to install and configure BI tools such as: PerformancePoint services and PowerPivot. BI tools require either SQL Server 2012 Enterprise or BI editions, and we do NOT want to upgrade our current SQL Server1\SQLInstance1
    Instead, We have other separate SQL Server instance which is enterprise edition let's name it (ServerX\InstanceX) that is running standalone, and we are thinking or using it, and my 2 questions are:
    1) Can we use this other separate
    SQL Server instance which is enterprise edition to host the create and hosts the DBs of PerformancePoint services and PowerPivot ?
    2) My second question is the same: Can I create PerformancePoint services application in my SharePoint farm, But in the Database Server field, I fill up
    the details of the other DB server ServerX\InstanceX  which is the one that is SQL
    enterprise edition ? Will this work ?
    Any official Microsoft resources/links tell that it is possible to deploy SharePoint or its service applications on multiple DB-Servers and multiple SQL Instances?

    Thank you Alex and Anil,
    What are the ramifications of that?
    I mean, Assuming that I have created such a farm where most of SarePoint DBs in Standard SQL instance while the PerformancePoint service application and others e.g. PowerPivot and reporting service are deployed and configured onto other Enterprise SQL instance.
    Are there any recommendations or concerns that you would like to draw my attention to ?

  • Multiple FPGA targets under one cRIO controller

    Hi !
    I was reading the cRIO System Configuration Information (CRI) Reference Library article (http://www.ni.com/example/51852/en/) and there Figure 9 shows a cRIO Controller with multiple FPGA targets. How can this be accomplished?
    In my case, when I tried to add a 2nd FPGA target, under my cRIO-9076, I get a message that only one can be associated with the controller. 
    Any ideas ?
    Solved!
    Go to Solution.

    The CRI Library claims support back to LabVIEW 8.5.1, which leads me to believe this screenshot was taken in that version. The RIO Scan Interface/FPGA Scan Engine (RSI) were introduced in LabVIEW 8.6 and NI-RIO 3.0.x. In order to include this support, the notion of a chassis in the LV project was introduced (notice there is no chassis under the controller in the screenshot). To better facilitate RSI and the Scan Engine and provide a more accurate representation of what is actually available in a system, you can only add one chassis per controller. This allows the RSI to load the correct controllers for deployment.
    In LV 8.5.1, you can add multiple targets to an integrated controller/FPGA system (like the cRIO-9072) even though there is no way that could happen in real life, so this isn't really that desirable. What you can still do is add multiple FPGA targets (even from cRIO chassis) under the My Computer target in your project. This will still allow you to communicate with the FPGA target, but any VIs will be running on your PC system, not the cRIO controller.
    Donovan

  • Enforcement States for multiple deployment ID's

    I would like to have a report for the enforcement states of multiple deployment ID's. I have tried manipulating the default "States 1 - Enforcement states for a deployment" to have multiple default values but have not succeeded in getting
    the report to run.
    In our Software Updates we have multiple collections targeting specific groups of computers and then we have specific update groups within specific date ranges deployed to those collections. In some cases I have multiple deployments targeting the same collection
    and thus the want to have a single report for the enforcement status of multiple deployment ID's.
    Unfortunatley my level of SQL reporting is minimal, does anyone have knowledge of a report or query to use multiple deployment ID's for returning the enforcement states?

    Hi,
    You may have a look on the following blog, hope this could help you edit your report.
    http://blogs.msdn.com/b/steverac/archive/2013/01/13/modifying-a-report-to-merge-software-update-deployments-with-updates-delivered-through-standard-software-distribution.aspx
    Best Regards,
    Joyce
    We
    are trying to better understand customer views on social support experience, so your participation in this
    interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • One off deployment target in ATG 10 Merch-Ui

    Hi All,
    We have migrated our application from ATG 9.3 to ATG 10.2
    We had configured One-Off deployment targets in the Merchandising landing page but after migrating our application to 10.2 we can no longer see One-Off deployment targets in the Merchandising landing page.
    Can you please let us know how can we configure the same in ATG 10.2?
    Moreover, we had created additional custom tabs for certain set of assets (for ex: products) to be accessible via Asset editor page but the custom tabs are also not available in the asset editor page.
    We can see that the viewMappings have been configured in the ViewMappingRepository but the custom tabs are not visible in the UI.
    Is there any other customization we need to perform for above mentioned issues?
    Can you please let me know incase anyone has faced such issues and what could be the possible resolution for it?
    Thanks in advance\
    Gaurav

    Thanks for the reply Gareth,
    I went through the doc mentioned and it clearly says that "One-off deployments can be launched at any time from an ATG Content Administration project".
    But as per our project requirement we had drop-down for One-Off deployment target in ATG 9.3 Merchandising landing page itself.
    Same requirement is to be met in ATG 10.2, however One-Off deployment target drop-down is not available in Flex-UI based Merchandising landing page.
    And in the Constraints section following is mentioned:-
    • A one-off target site is not available for deployment assignments in a workflow. In order to make that site available for workflow deployments, you must delete the target and recreate it.
    So if follow this step and make site available for workflow deployment, it would defeat our purpose of deploying the project to One-Off deployment target.
    I hope I have cleared my point.
    Do let me know incase you can throw some light on issues mentioned above.
    Thanks\
    Gaurav

  • Cache and performance issue in browsing SSAS cube using Excel for first time

    Hello Group Members,
    I am facing a cache and performance issue for the first time, when I try to open a SSAS cube connection using Excel (using Data tab  -> From Other Sources --> From Analysis Services) after daily cube refresh. In end users
    system (8 GB RAM), for the first time, it takes 10 minutes to open the cube. From next run onwards, its open up quickly within 10 secs.
    We have daily ETL process running in high end servers. The configuration of dedicated SSAS cube server is 8 core, 64GB RAM. In total we have 4 cubes - out of which for 3 is full cube refresh and 1 is incremental refresh. We have seen after
    daily cube refresh, it takes 10 odd minutes to open the cube in end users system. From next time onwards, it opens up really fast with 10 secs. After cube refresh, in server systems (16 GB RAM), it takes 2 odd minutes to open the cube.
    Is there, any way we could reduce the time taken for first attempt ?
    Best Regards, Arka Mitra.

    Thanks Richard and Charlie,
    We have implemented the solution/suggestions in our DEV environment and we have seen a definite improvement. We are waiting this to be deployed in UAT environment to note down the actual performance and time improvement while browsing the cube for the
    first time after daily cube refresh.
    Guys,
    This is what we have done:
    We have 4 cube databases and each cube db has 1-8 cubes.
    1. We are doing daily cube refresh using SQL jobs as follows:
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
    <Parallel>
    <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
    <Object>
    <DatabaseID>FINANCE CUBES</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
    <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
    </Process>
    </Parallel>
    </Batch>
    2. Next we are creating a separate SQL job (Cache Warming - Profitability Analysis) for cube cache warming for each single cube in each cube db like:
    CREATE CACHE FOR [Profit Analysis] AS
    {[Measures].members}
    *[TIME].[FINANCIAL QUARTER].[FINANCIAL QUARTER]
    3. Finally after each cube refresh step, we are creating a new step of type T-SQL where we are calling these individual steps:
    EXEC dbo.sp_start_job N'Cache Warming - Profit Analysis';
    GO
    I will update the post after I receive the actual im[provement from UAT/ Production environment.
    Best Regards, Arka Mitra.

  • Not able to deploy HumanTask and Decision service using ant file

    Hi all,
    I am using ant file to complile and deploy a bpel process. Currently i am able to compile the bpel process and deploy it. But when it come to deploying humantask and decesion services I am getting a message.
    [deployDecisionServices] There are no decision services to deploy
    [deployTaskForm] There are no forms to deploy
    Kindly guide me out of this issue
    below is my ant file
    <?xml version="1.0" encoding="iso-8859-1"?>
    <project name="bpel.deploy" default="AuthorizeCreditRequestApproval" basedir=".">
    <property name="process.dir" value="..\SVNCopy\Code}"/>
    <!-- Set BPEL process names -->
    <xmlproperty file="${process.dir}\AuthorizeCreditAS\bpel\bpel.xml"/>
    <property environment="env"/>
    <!-- Set bpel.home from developer prompt's environment variable BPEL_HOME -->
    <condition property="bpel.home" value="${env.BPEL_HOME}">
    <available file="ant-orabpel.xml"/>
    </condition>
    <!-- If bpel.home is not yet using env.BPEL_HOME, set it for JDev -->
    <property name="bpel.home" value="${oracle.home}/integration/bpel"/>
    <!-- First override from build.properties in process.dir, if available -->
    <property file="build.properties"/>
    <!-- import custom ant tasks for the BPEL PM -->
    <import file="ant-orabpel.xml"/>
    <!-- Use deployment related default properties -->
    <property file="ant-orabpel.properties"/>
    <target name="AuthorizeCreditRequestApproval" depends="CheckCustomerMarketSegment">
    <echo>
    | Deploying workflow form for AuthorizeCredit on ${j2ee.hostname}, port ${http.port}
    </echo>
    <deployTaskForm
    platform="${platform}" dir="../SVNCopy/Code/ManageCustomerCreditEligibility/AuthorizeCreditBS/public_html"
    domain="${domain}" process= "AuthorizeCreditBS" rev="${rev}"
    user="${admin.user}" password="${admin.password}"
    hostname="${j2ee.hostname}" cluster="${cluster}"
    rmiport="${rmi.port}" opmnrequestport="${opmn.requestport}"
    oc4jinstancename="${oc4jinstancename}"
    asinstancename="${asinstancename}" verbose="${verbose}"
    />
    </target>
    <target name="CheckCustomerMarketSegment" depends="compileAuthorizeCredit">
    <echo>
    | Deploying decision services for AuthorizeCredit on ${j2ee.hostname}, port ${http.port}
    </echo>
    <deployDecisionServices
    platform="${platform}" dir="../SVNCopy/Code/AuthorizeCredit/decisionservices"
    domain="${domain}" process="AuthorizeCredit" rev="${rev}"
    user="${admin.user}" password="${admin.password}"
    hostname="${j2ee.hostname}" httpport="${http.port}"
         cluster="${cluster}" rmiport="${rmi.port}"
    opmnrequestport="${opmn.requestport}" oc4jinstancename="${oc4jinstancename}"
    asinstancename="${asinstancename}" verbose="${verbose}"
    />
    </target>
    <target name="compileAuthorizeCredit" depends="checkout">
    <echo> --- | Compiling bpel process AuthorizeCreditBS| ---
    </echo>
    <bpelc input="../SVNCopy/Code/AuthorizeCredit/bpel/bpel.xml"
    out="../SVNCopy/Code/AuthorizeCredit/output" rev="${rev}"
    home="${bpel.home}"/>
    <echo> --- | AuthorizeCredit Compiled Successfully, revision ${rev}| ---
    </echo>
    </target>
    </project>
    Edited by: Arun Vikram on Mar 4, 2010 10:45 PM

    Is there any workflow or decision service in ur bpel???
    if present,...the ant is not getting ../SVNCopy/Code/AuthorizeCredit/decisionservices and ../SVNCopy/Code/ManageCustomerCreditEligibility/AuthorizeCreditBS/public_html folders.
    Edited by: krish.chaitnya on Mar 4, 2010 11:20 PM

  • Urgent : OBIA - Handling multiple Global, Local and Document Currencies

    All,
    I need input on how to configure multiple currency codes in DAC for OBIA 7.9.6.x
    My client's business goes around the the world and with multiple local currencies.They also need reporting currency to be multiple like in Euro, USD etc.
    I went through a couple of threads in this forum,like -
    Re: Configuring Global Currencies in BI Apps 7.9.6 for EBS 11.5.10 Source
    Re: How we are using Global, Local and Document Currencies  in DAC.
    But I have some confusion regarding the configuration of DAC's 3 Global currency codes.
    In order to fulfill my requirement - should I add additional codes in DAC?
    Basically How am I supposed to handle multiple Local,Document and Global currencies? And then possible changes in rpd/reports required also?
    Regards,
    Krish

    Currencies are discussed in the Setup and Configuration Guide here:
    7.1.1.2 How to Configure Global Currencies
    To configure the global currencies you want to report in your warehouse:
    In the DAC Client, display the Design view.
    For more information about logging into the DAC, see Section A.1, "How to Log Into the DAC Client".
    Select a container from the drop down list to the right of the Execute button.
    Display the 'Source System Parameters' tab.
    Locate the following parameters and set the currency code values for them in the 'Value' box:
    $$GLOBAL1_CURR_CODE (for the document currency).
    $$GLOBAL2_CURR_CODE (for the local currency).
    $$GLOBAL3_CURR_CODE (for the global currency).
    Make sure that you spell the currencies as they are spelled in your source OLTP system.
    Save your changes.
    As far as for PLP items, those are Post Load Processing elements that perform cleanup tasks after the Base Warehouse tables have been loaded. They should never be modified and frankly run without any issues provided the rest of the plan executes properly. Do you have a specific quesiton about a PLP?

  • Installing multiple Deployment Agents ... or vNext?

    As I'm expanding my use of RM, I'm running into a need to reuse target machines for both my 'RM-development/release' testing, and 'RM-real-internal-code-releases'.  My issue is that the deployment agent on the targeted server is configured to receive
    from only one RM server.
    I'm considering installing a second Deployment instance and configuring it to the separate RM server.  My concerns are:
    - can two deployment agents co-exist on the same server?
    - Is this just a bad idea in general, since I will have to install the second deployment agent on 4-6+ servers, to provide a full testing suite.
    I understand the RM strategy is to go agentless, and that vNext is agentless.  Should I switch to vNext?  Is it stable enough?
    thx
    Curt Zarger [email protected]

    John and Graham,
    thx for your responses.
    John: No 'developer tools' can be installed on the target machines under STIGS government security controls (http://iase.disa.mil/stigs).
    Graham: The following is my simplified tactical plan.  My strategic plan is to jump to 2015 and go full continuous delivery.
    These are the step I went thru.  Hopefully they will be of use to someone else.
    To overcome the STIGS restrictions on installation of developer tools, I tried to replace the files reported missing the the RM log
    (TFSBuild.exe, Microsoft.TeamFoundation.Client, etc) that support running a TFSBuild via command line.  No good.
    Since I had to execute on a machine with VS installed, I changed my RM stage environments' target machines to be my TeamBuild build agent machines.  This executes my build, including my MSBuild/Powershell scripts.  Since my scripts already handle
    the remoting of files and configurations to the intended target machine, it works.
    The drawbacks are 1. no triggering of RM from a TeamBuild, due to the mismatched templates being used.  2. This configuration of the staging environments, ie. none configured to the actual target machines, leaves me no migration path for moving code
    over to RM to be executed.
    I've decided to live with #1 and address #2.  The resolution is to split each stage into two stages.  Using "Dev" as the first stage, I set up pre-Dev and Dev stages.  The pre-Dev stage runs the TFSBuild command on the TeamBuild
    agent.  The Dev stage is then available to run any additional code added or migrated.  This is a bit clunky and complicates the approval flow a bit, but...
    Curt
    Curt Zarger [email protected]

  • Which one is the best way to collect config and performance details in azure

    Hi ,
    I want to collect the information of both configuration and performance of cloud, virtual machine and web role .I am going to collect all these details using
    java.  so Please suggest which one is the best way. 
    1) REST API
    2) Azure SDK for java
    Regards
    Rathidevi
    rathidevi

    Hi,
    There are four main tasks to use Azure Diagnostics:
    Setup WAD
    Configuring data collection
    Instrumenting your code
    Viewing data
    The original Azure SDK 1.0 included functionality to collect diagnostics and store them in Azure storage collectively known as Azure Diagnostics (WAD). This software, built upon the Event Tracing for Windows (ETW) framework, fulfills two design requirements
    introduced by Azure scale-out architecture:
    Save diagnostic data that would be lost during a reimaging of the instance..
    Provide a central repository for diagnostics from multiple instances.
    After including Azure Diagnostics in the role (ServiceConfiguration.cscfg and ServiceDefinition.csdef), WAD collects diagnostic data from all the instances of that particular role. The diagnostic data can be used for debugging and troubleshooting, measuring
    performance, monitoring resource usage, traffic analysis and capacity planning, and auditing. Transfers to Azure storage account for persistence can either be scheduled or on-demand.
    To know more about Azure Diagnostics, please refer to the below article ( Section : Designing More Supportable Azure Services > Azure Diagnostics )
    https://msdn.microsoft.com/en-us/library/azure/hh771389.aspx?f=255&MSPPError=-2147217396
    https://msdn.microsoft.com/en-us/library/azure/dn186185.aspx
    https://msdn.microsoft.com/en-us/library/azure/gg433048.aspx
    Hope this helps !
    Regards,
    Sowmya

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am relative novice to Java, I have procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is regarding performance of using char[i] arrays versus Strings and StringBuilder classes using charAt() methods.
    I read a file, one line/record at a time and then I process it. The regex is not an option (too slow and can not handle all cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder too) class using String.charAt(i) methods is several times (5 times+?) slower than referring to a char of an array with index.
    My question: is this correct observation re charAt() versus char[i] performance difference or am I doing something wrong in case of a String class?
    What is the best way (performance) to process character strings inside Java if I need to process them one character at a time ?
    Is there another approach that I should consider?
    Many thanks in advance

    >
    Once I took that String.length() method out of the 'for loop' and used integer length local variable, as you have in your code, the performance is very close between array of char and String charAt() approaches.
    >
    You are still worrying about something that is irrevelant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean', record of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the record at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if every the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters and each one can be multi-character (e.g. you can use 'XyZ' if you want.
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have few fields than required meaning it needs to be combined with one, or more of the following records.
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOS whereby all consecutive NULL fields at the of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files take that into account. There have been some attempts (you can find them on the net) for using various 'escaping' techniques to escape those characters where they occur but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest version of Informatica and DataStage cannot export a simple one column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist let alone do any about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database inpu.
    It does not feel like a job for RegEx (I want to be able to maintain the code and complex Regex is often 'write-only' code that a 9200bpm modem would be proud of!) and I don't think PL/SQL will be any faster or easier than Java for this sort of character based work.
    >
    Actually for 'validating' that a string of characters conforms (or not) to a particular format is an excellent application of regular expressions. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited for RegEx. That is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures so that could be done in the database. I would only recommend pursuing that approach if you were already needing to perform some substantial data validation or processing the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details about String versus Array are way premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major nternational clients. I have never had any issues with the level of detail you have asked about in this thread.
    Throughout is limited by the performance of the SOURCE and the SINK. The processing in-between has NEVER been an issu.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what Interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later if, and ONLY if, that particular module becomes a bottlenect, replace that module with one that is more performant.
    Your intital code should ONLY use standard well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).

Maybe you are looking for

  • I am unable to post comments to links on Facebook. This is a recent problem. I can do it with Internet Explorer

    All of a sudden, I can no longer post comments on Facebook. I get a dialogue box and type my comments. But when I hit enter, nothing happens. I opened Internet Explorer and the feature works properly. I have, in the past, been able to post on Firefox

  • Standard FI/CO Cash Journal datasources

    Hi all, Could you please advice me what standard BI Content datasources are used for cash journal operations We need to build a report based on the data from ERP t-code FBCJ Thanks in advance

  • Verifying digital signature

    Hi, I am a newbie to security. All I want is to sign a message with a private key and check the signature with corresponding public key. But running below program I always keep getting "Signature failed. Possible imposter found." What am I doing wron

  • Canvas on a Frame

    I'm trying to add a canvas with a test string on it to a frame. I just see the button but no string. What's wrong here? import java.awt.* public class SurfString extends Canvas      public void paint(Graphics g) {           g.drawString("Hello World!

  • Correct setting for exchange rates differences

    Hey SAP B1 experts, I have a problem with automatic generated exchange rates differences in B1. Situation: - I inserted all correct general ledgers in the g/l account determinations. - The seeting in the item masterdata for "set g/l accounts by" is s