Best approach to report on ports in use during the month

I have been asked to come up with a report, once a month, outlining the network device ports that are in use per customer location.
I see the Device Manager report, but it doesn't appear to give a simple count of the ports in use on a device during the past 24 hours.
Any thoughts or ideas would be great. It appears our customer doesn't want to pay for installed ports, just for the ports that are used during the month.
Thanks,
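One low-level way to get a raw "ports in use" count directly from a device, if your management product can't report it, is to walk IF-MIB ifOperStatus over SNMP and count the interfaces that are operationally up. Below is a minimal sketch using the open-source SNMP4J library; the device address and community string are placeholders, and note that "up at poll time" only approximates "used during the month", so you would poll periodically and aggregate.
import java.util.List;
import org.snmp4j.CommunityTarget;
import org.snmp4j.Snmp;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;
import org.snmp4j.util.DefaultPDUFactory;
import org.snmp4j.util.TreeEvent;
import org.snmp4j.util.TreeUtils;

public class PortsInUse {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setAddress(GenericAddress.parse("udp:192.0.2.1/161")); // placeholder device address
        target.setCommunity(new OctetString("public"));               // placeholder community string
        target.setVersion(SnmpConstants.version2c);

        // Walk IF-MIB::ifOperStatus (1.3.6.1.2.1.2.2.1.8); a value of 1 means "up".
        TreeUtils walker = new TreeUtils(snmp, new DefaultPDUFactory());
        List<TreeEvent> events = walker.getSubtree(target, new OID("1.3.6.1.2.1.2.2.1.8"));

        int up = 0, total = 0;
        for (TreeEvent e : events) {
            if (e.getVariableBindings() == null) continue;
            for (VariableBinding vb : e.getVariableBindings()) {
                total++;
                if (vb.getVariable().toInt() == 1) up++;
            }
        }
        System.out.println(up + " of " + total + " interfaces operationally up");
        snmp.close();
    }
}
A scheduled job could record these counts daily; the monthly report would then count any port that was up at least once in the month.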

Hi Peter,
You don't say what type of reporting you want to do. I'm assuming it is probably something like displaying the 1000 most recently approved forms or similar?
If that's the case then you may be able to achieve what you need using the Search Core Results Web Part with some clever querying, which is likely to be the most performant method.
Once you have a result set back, you can style it using XSLT in the SCRWP, or alternatively use the Search Query Model from code in a farm solution/web part.
Alternatively, you could use SSIS to extract the data from the SharePoint list into a SQL table on a nightly basis. (This article covers how to get data out of SharePoint and into SSIS:
http://msdn.microsoft.com/en-us/library/hh368261.aspx)
Regards
Paul.
Please ensure that you mark a question as Answered once you receive a satisfactory response. This helps people in future when searching and helps prevent the same questions being asked multiple times.

Similar Messages

  • Best Approach for Reporting on SAP HANA Views

    Hi,
    Kindly provide information on the best approach for reporting on HANA views for the architecture displayed below.
    We are on the lookout for information mainly around the following points:
    There are two reporting options known to us, listed below:
    Reporting on HANA views through SAP BW (View > VirtualProvider > BEx > BI 4.1)
    Reporting on HANA views in ECC using BI 4.1 tools
    Which is the best option for reporting (please provide supporting reasons, i.e. advantages and limitations)?
    In case a better approach exists, please let us know.
    Also, what is the best reporting approach for a mixed scenario where data from BW and HANA views is to be used together?

    Hi Alston,
    To be honest, I did not fully understand the architecture you have described in your message.
    As far as I understood, you have a HANA instance, with one ERP and BW running on HANA. Or there might be two HANA instances, with ERP and BW running independently.
    Anyway, if you have HANA you have many options to present data using analytic views. You also have BW on HANA as your EDW. So for both you can use BO, as well as Lumira, for presenting data.
    Check this document as well: http://scn.sap.com/docs/DOC-34403

  • How to find out the VNC port number used by each guest VM

    Hi,
    I am using OVM 2.2.0.
    How do I find out the VNC port number used by each guest VM, using the command line on the VM server?
    Thanks in advance.

    Hi Avi,
    Thanks for your reply.
    I tried, but still no success, and I am getting the following errors:
    [root@OVM-SERVER-1 ~]# xm list
    Name       ID  Mem  VCPUs  State   Time(s)
    Domain-0    0  543      2  r-----    459.1
    test        1  300      1  r-----     13.0
    [root@OVM-SERVER-1 ~]# virsh dumpxml
    error: command 'dumpxml' requires <domain> option
    [root@OVM-SERVER-1 ~]# virsh dumpxml 1
    libvir: Remote error : No such file or directory
    libvir: warning : Failed to find the network: Is the daemon running ?
    libvir: Xen Daemon error : internal error domain information incomplete, missing kernel & bootloader
    [root@OVM-SERVER-1 ~]# virsh dumpxml test
    libvir: Remote error : No such file or directory
    libvir: warning : Failed to find the network: Is the daemon running ?
    libvir: Xen Daemon error : internal error domain information incomplete, missing kernel & bootloader
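    As general background rather than an OVM-specific fix: Xen guests' VNC consoles listen on TCP port 5900 plus the display number, so even when the libvirt tooling is unhealthy you can discover the ports in use with netstat or a trivial probe of that range. Below is a minimal, hypothetical Java sketch of such a probe; it assumes the VNC servers listen on the local host, and it only shows which ports have listeners, not which VM owns each one.
    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class VncPortScan {
        public static void main(String[] args) {
            // VNC displays map to TCP ports 5900 + N; probe the first 100 displays.
            for (int port = 5900; port < 6000; port++) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress("127.0.0.1", port), 200);
                    System.out.println("VNC listener found on port " + port);
                } catch (IOException ignored) {
                    // nothing listening on this port
                }
            }
        }
    }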

  • What is the best approach to report building in my case?

    Hi all,
    I'm just getting started with Crystal Reports for our Swing-based desktop application.  We need the ability to generate PDF and XLS reports, perhaps later adding web-based dashboarding and interactive reports.  I'm trying to determine the best approach to take with Crystal Reports to fit our application's data.
    Our app stores results in a separate database (either Oracle, SQLServer, or Apache Derby).  The result records contain lots of ID lookups to tables in another database.  This makes using straight SQL for reporting difficult as I would like to avoid cross-database queries.  So I'm thinking of using the POJO reporting approach where our app gathers the results, generates POJOs, and then passes them to the report.
    My concern with this POJO approach is that it seems to require loading all results into memory and generating the report in one big step.  I've read other posts referring to heap issues.  Is there a way to avoid this? Some way to page through report data?
    I've also read that Crystal Reports can work with any data provider that implements ResultSet.  Is this true?  If so, could I create my own custom ResultSet implementation that would let me page through my results without loading everything into memory at once?  If possible, please point me to the documentation for this approach.  I haven't been able to find any examples.
    If there is a better approach that I haven't mentioned, please let me know. 
    Thanks in advance,
    Guy

    The first option is the best one for performance.  The only time you should use result sets is when you need to do runtime manipulation of the data through your application that is not achievable in a stored procedure.
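    As an illustration of the paging idea raised in the question: if heap is a concern, you can fetch rows in bounded batches over JDBC and hand each batch of POJOs to the reporting layer, rather than materializing everything at once. The sketch below is generic and hypothetical; the results table, ResultRow class, and loadBatch method are invented for illustration, and whether your report engine can consume batches incrementally needs to be checked against the Crystal Reports Java SDK documentation.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    /** Hypothetical row POJO, for illustration only. */
    class ResultRow {
        final long id;
        final String name;
        ResultRow(long id, String name) { this.id = id; this.name = name; }
    }

    public class BatchedResultLoader {
        /** Loads one page of rows keyed on id, so the full result set is never held in memory. */
        static List<ResultRow> loadBatch(Connection conn, long afterId, int batchSize)
                throws SQLException {
            String sql = "SELECT id, name FROM results WHERE id > ? ORDER BY id";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setFetchSize(batchSize);  // hint to the driver to stream rather than buffer
                ps.setMaxRows(batchSize);    // cap the page size
                ps.setLong(1, afterId);
                List<ResultRow> batch = new ArrayList<ResultRow>(batchSize);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        batch.add(new ResultRow(rs.getLong("id"), rs.getString("name")));
                    }
                }
                return batch;
            }
        }
    }
    Each call resumes after the last id seen, so memory use is bounded by batchSize regardless of the total row count.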

  • What's the best approach to migrate to Snow Leopard using Time Machine?

    I have a time capsule, and have noticed the restore from backup option once I had to replace the HD on my MacBook Pro.
    I am about to buy a new MacBook Pro which will probably ship with Snow Leopard, and am wondering whether, when I set up the new OS, I can restore from a backup of my old system?
    Thanks
    Miguel

    Don't use the full system restore from backup option with a new computer; that can only be used on the same exact computer. Also, that option erases the destination drive and replaces it with the copy of the backed-up system, which is not what you want.
    When you first turn on the new computer you'll get a Setup Assistant which gives you an option to migrate your user data and applications from a TM backup. You can also do it later using Migration Assistant, located in Applications/Utilities.

  • Which port to use for the peer keepalive

    Hi All,
    We have 2 Nexus 6001s in our data center.
    The management port of each 6001 is connected to the other, and this link is used as the peer keepalive link.
    My colleague is suggesting that we use one of the inline data ports as the keepalive link instead.
    Can you please advise on the pros and cons of using the management port versus an inline data port as the keepalive link, and the best practice to follow in this case?
    Thanks,
    Pete

    Hi Pete,
    Here are the best recommendations, in order of preference.
    1. Use mgmt0 (along with management traffic)
         * Pros: what's good about this option is that you totally separate the vPC peer keepalive link in another VRF (management), and it does not mingle with the data or global VRF.
         * Cons: the vPC PKL is dependent on the OOB management switch.
    2. Use dedicated 1G/10GE front panel ports.
        * Pros: it can be a direct link between the N6K pair, not dependent on other boxes.
        * Cons: you need extra SFPs for the vPC PKL, and the PKL traffic joins the global VRF.
    HTH
    Jay Ocampo

  • Best approach to delete records that are not in the source table anymore.

    I have a situation where I need to remove records from dimensions that are no longer in the source data. Right now we are not maintaining history, i.e. not using SCD, but we are planning to for the next release. If we did that, it would be easy to find the latest records. The load is nightly, and records are updated and new ones added.
    The approach I am considering is to join the dimension tables to the sources on keys and delete what doesn't join. However, is there perhaps some function in OWB that would allow this to be done automatically on import, so it can also be in place for the future?
    Thanks!

    Bear in mind that deleting dimension records becomes problematic if you have facts attached to them. Just because a record is no longer in the active set doesn't mean that it wasn't used historically, and so it may have foreign key constraints on it in your database. If this is the case, a short-term solution would be to add an expiry_date field to the dimension and update the load to set this value when the record disappears, rather than to delete it.
    To do that, use the target dimension as a source table and outer join it to the actual source table on the natural key; then have your update set expiry_date = nvl(expiry_date, sysdate), i.e. to sysdate if the record has not already been expired, on all records where the outer join fails.
    Further consideration: what do you do if the record is re-inserted into the source table? Create a new dimension key? Or remove the expiry date?
    But I will say that I am not a fan of deleting records in most circumstances. What do you do if you discover a calculation error and need to fix it and republish historical cubes? Without the historical data, you lose the ability to do things like that.
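    For concreteness, the expiry update described above might look like the following when run from a JDBC client. The table and column names (customer_dim, src_customer, customer_nk, expiry_date) and the connection details are hypothetical, and NOT EXISTS is used here as the equivalent of "the outer join fails".
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ExpireMissingDimensionRows {
        public static void main(String[] args) throws Exception {
            // Hypothetical Oracle connection; in OWB this would be a mapping or post-load step.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/DWH", "dw_user", "dw_password")) {
                // Expire, rather than delete, dimension rows whose natural key no longer
                // exists in the source; NVL preserves any expiry date already set.
                String sql =
                    "UPDATE customer_dim d "
                  + "   SET d.expiry_date = NVL(d.expiry_date, SYSDATE) "
                  + " WHERE NOT EXISTS (SELECT 1 FROM src_customer s "
                  + "                    WHERE s.customer_nk = d.customer_nk)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    System.out.println(ps.executeUpdate() + " dimension rows expired");
                }
            }
        }
    }
    In OWB itself the same logic can be drawn as a mapping with an outer join feeding an update target; the JDBC wrapper above is only for illustration.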

  • Which connection port to use on the Canon MP530 printer

    I'm not sure if I'm using the correct port on this printer to connect to my computer. There's a port on the back that looks like something other than a perfectly flat-edged, wide, rectangular USB port, but it has the symbols that you see on the end of USB 2.0 cables molded above it, indicating it's that kind of port. There's another port on the front of the machine that looks like it fits a USB 2.0 cable, in that it has a nice, wide and flat connection area, but it has some other kind of weird symbol molded above it that I don't recognize. Am I confusing it with something else? When I use a USB 2.0 cable with a male A connector on each end, connecting one end to my Mac and the other to that port, I'm having no luck getting the printer to be seen by my computer, even after installing the latest drivers that Canon's website recommends downloading. Can anybody help me figure out if I'm simply misunderstanding the kind of cable I need to connect my printer to my computer?

    I was using the wrong kind of USB cable to connect the printer and the computer. I needed to use a USB Type B connector on the printer and a USB Type A connector on the computer. My problem is now solved, although I still don't know what the other port is for that I was connecting my USB Type A connector to. There's no mention of it on Canon's website or in the papers and manuals that came with the printer. That's lousy manual-writing on Canon's part for this model.

  • Report to Generate Materials Used for the Production of FG/SFG Material

    Dear Experts,
    I am working for a cement factory. CLINKER is an SFG used for the production of the FG Cement.
    RMOPCK1 and FINECOAL are two more SFGs, used for the production of CLINKER:
    RMOPCK1 + FINECOAL = CLINKER, and CLINKER + (different materials) = CEMENT
    For CLINKER we create a process order every month, and similarly for RMOPCK1 and FINECOAL.
    RMOPCK1 uses 5 raw materials and FINECOAL uses 1 raw material:
    Limestone + Silica 1 + Silica 2 + Bauxite 1+ Ironore 1 = RMOPCK1 and
    Rawcoal = FINECOAL
    All the raw materials and SFGs are used for creating different varieties of cement.
    I want to know, for a certain period, how much CLINKER was produced and how much raw material was used to produce that CLINKER.
    When I try this through KOB1 with the process orders for CLINKER, it gives me the total usage of RMOPCK1 and FINECOAL for producing the CLINKER.
    I want to know, for example:
    the quantity of raw materials (Limestone + Silica 1 + Silica 2 + Bauxite 1 + Ironore 1) used for producing, say, 1000 TON of CLINKER.
    Please help. Thanks
    Edited by: simonjohn on Oct 9, 2011 6:46 PM

    Thanks

  • Best practice for setting an environment variable used during NW AS startup

    We have installed some code which runs in both the ABAP and Java environments, and some functionality of this code is determined by the setting of operating system environment variables. We have therefore changed the .sapenv_<host>.csh and .sapenv_<host>.sh scripts found in the <sid>adm user home directory. This works, but we are wondering what happens when SAP is upgraded, and whether these custom changes to the .sh and .csh scripts will be overwritten during such an upgrade. Is there a better way to set environment variables so they can be used by the SAP server software when it has been started from the <sid>adm user?
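    (For illustration of the consuming side: reading such a variable from Java is a one-liner via System.getenv; the variable name MYCO_FEATURE_MODE below is purely hypothetical.)
    public class FeatureFlag {
        public static void main(String[] args) {
            // Hypothetical variable, expected to be exported by the <sid>adm login scripts.
            String mode = System.getenv("MYCO_FEATURE_MODE");
            if (mode == null) {
                mode = "default";  // sensible fallback when the variable is not set
            }
            System.out.println("Running with feature mode: " + mode);
        }
    }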

    Hi,
    Thank you. I was concerned that if I did that, there might be a case where the .profile is not used, e.g. when a non-interactive process is started.
    What do you mean by non-interactive?
    If you log in to your machine as sidadm, the profile is invoked using one of the files you mentioned, so when you start your engine the environment is properly set. If another process is spawned or forked from a running process, it inherits/uses the same environment.
    Also, on one of my servers I have a .profile, a .login and also a .cshrc file. Do I need to update all of these?
    The .profile is used by bash and ksh.
    The .cshrc is used by csh, and it is included via source on every shell startup unless the shell is invoked with the -f flag.
    The .login is also used by csh, and it is included via source from the .cshrc.
    So if you want to support all shells, you should update the .profile (for bash and ksh) and one of .cshrc or .login (for csh or tcsh).
    In my /etc/passwd the <sid>adm user is configured with the /bin/csh shell, so I think this means my .cshrc will be used and not the .profile? Is this correct?
    Yes, correct, as described above!
    Hope this helps
    Cheers

  • What criteria should I use for the monthly updates

    I am just starting to set up Software Updates for SCCM 2012 R2. The configuration is done, with the Classifications and Products selected and working.  I selected the Classifications:
    Critical Updates
    Definition Updates
    Security Updates
    Service Packs
    Update Rollups
    Updates
    Now I am just looking for the criteria to use for my Software Update Groups.
    I watched this GREAT VIDEO, "SCCM 2012 SP1 and the new way handling Software Updates explained", URL:
    http://technet.microsoft.com/en-us/video/sccm-2012-sp1-and-the-new-way-handling-software-updates-explained.aspx, regarding the "new way" we are to be doing the groups and updates.  The video is very informative and explains a lot, but it does not say what criteria are used.
    This is the breakdown from the video, but again it doesn't say what the criteria might be:
     • Keep Software Update Groups (SUG) Limited to 1,000 updates
     • Don't split products into different SUG's
     • Enabled Delta replication for the Software Update Points (SUP)
     • Set High priority for the Software Distribution Group
    Software Update Groups
     • Exceptions group (not to deploy)
     • TMG Definition Updates
     • SCEP 2012 Definition Updates
     • Outlook Definition Updates
     • 2003-2010_All Updates
     • 2011_All Updates
     • 2012_All Updates
     • 2013-01_All Updates
     • 2013-02_All Updates
    Deployment Packages
     • TMG Definition Updates
     • SCEP 2012 Definition Updates
     • Outlook Definition Updates
     • 2003-2010_All Updates
     • 2011_All Updates
     • 2012_All Updates
     • 2013_All Updates
    Automatic Deployment Rules
     • Microsoft Outlook 2010/2013 Definition Updates
      ○ Run after synchronization
     • Monthly Updates
      ○ Every Month on day 16 of the month
      ○ Uncheck "Enable the deployment after this rule is run"
     • SCEP 2012 Definition Updates
      ○ Run after synchronization
     • TMG Definition Updates
      ○ Run after updates
    I am looking mostly at the Monthly Updates rule, to make sure it includes only the updates that are needed.  Most examples I have seen use:
    Superseded:  No
    Expired:  No
    Bulletin starts with "MS"
    I don't really know what the "MS" limits things to.  Sorry, that's a lot of information and not very clearly laid out.
    Find this post helpful? Does this post answer your question? Be sure to mark it appropriately to help others find answers to their searches.

    I am looking mostly at the Monthly Updates rule, to make sure it includes only the updates that are needed.  Most examples I have seen use:
    Superseded:  No
    Expired:  No
    Bulletin starts with "MS"
    I don't really know what the "MS" limits things to.
    I use a similar approach, as one part of my routine.
    When you filter/search on Superseded=No, that discards the updates which are flagged as superseded, but this depends totally upon your routine and your understanding of the nature of supersedence. It's also time-sensitive, since CM12 will, by default, kill off superseded updates after a time. A superseded update *is* deployable, but you'd likely only do that if the superseding update is unsuitable for your environment for some reason, e.g. it introduces a bug or regression that you can't afford.
    Expired=No, well, again that's to limit your filter/search results. Since an expired update is not deployable, why bother to have it in your results list if you are building a deployment? It's handy to know that something is, or has become, expired if you are troubleshooting or analysing a "why did/didn't xyz happen?", but for deploying, expired updates are just useless clutter.
    Bulletin starts with "MS" can be handy to filter out the "non-security" updates, since "security" updates are always issued with an MSyy-nnn bulletin ID.
    Note that the bizarre MS definition of "security update" does not mean that security bulletins cover *all* security-related updates: there are updates which do provide security fixes, enhancements, etc., but which don't always attract a security bulletin ID. So it really pays to look at "starts with MS" and also "does not start with MS" - I use both when searching/filtering, as a way to slice the assessment/analysis task in "half" (not evenly sliced ;).
    Also note that some updates (security advisories, hotfixes, rollups) may not fall into the classification you might expect, or might not be published into the MU/WSUS catalog feed at all.
    An example of an unexpected classification is the recent 2917500, which is classified under "Feature Packs" even though it is an updated set of revoked root certificates (due to yet another compromised root CA cert). [I noted you didn't mention the Feature Packs classification; you might want to revisit that choice...]
    Personally, I'm not yet using ADRs; not because I don't like them, but because our security ops team are being especially painful at the moment and keep changing their mind about stuff, plus we are undergoing a massive application change cycle and the change/release team are hypersensitive just now.
    One method I also use is to periodically run the "updates needed but not deployed" report, as it reveals where I need to sync a new product, or have overlooked an update, or have misunderstood/incorrectly assessed something as "we don't need that, or that doesn't apply to us". It also reveals where I have mucked up and not been diligent about taking a templated approach to deployments, e.g. if I forget to distribute to all DPs or whatever.
    I find that even though I might take what I consider to be a very generic approach, others in the forums clearly have different environments/operational models to me, and different needs. For patching, I mainly focus on the Windows client OS and Office, and not really on the Windows server OS, since somebody else handles server builds and patching in my org.
    Don
    (Please take a moment to "Vote as Helpful" and/or "Mark as Answer", where applicable.
    This helps the community, keeps the forums tidy, and recognises useful contributions. Thanks!)

  • How to generate a Yearly report based on a calculation at the Month level

    I need to create a report as follows. Any ideas on how this can be accomplished in OBI are appreciated. I have already tried different ways, but none worked for me.
    The data is stored in a table at day level as follows:
    Day         Amount_A  Amount_B
    1/1/2008       100       100
    1/15/2008      200       100
    2/1/2008       100       400
    2/15/2008      300       200
    1/1/2009       100       300
    1/15/2009      100       200
    2/1/2009       200       100
    2/15/2009      400       300
    The report should be displayed at Year level. Amount_A is just the summation of Amount_A from the table at daily level rolled up to the Year level in the Time dimension. The same thing for Amount_B.
    The formula for Absolute_Error is Absolute(Amount_A - Amount_B). But the problem is that it has to be calculated at the Month level instead of Day Level. So following is the logic for Absolute_Error:
    Month     Amount_A Amount_B     Absolute_Error
    Jan-2008     300     200     100
    Feb-2008     400     600     200
    Jan-2009     200     500     300
    Feb-2009     600     400     200
    The report should be displayed as follows:
    Year Amount_A Amount_B     Absolute_Error
    2008     700     800     300
    2009     800     900     500
    Note that the calculation of Absolute_Error results in a different value if it is calculated at the Month level and summed up to Year than if it were calculated at the Day level and then summed up to Year. It is required to be based on Month level for this report.
    Is there a way to do this without having to build an aggregated fact table at the Month level?
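    (To make the aggregation-order point concrete, here is a small self-contained sketch, with the 2008 sample rows hard-coded, that computes Absolute_Error both ways: summed at day level it gives 500, while summed at month level it gives the required 300.)
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class AbsoluteErrorDemo {
        public static void main(String[] args) {
            // 2008 sample rows from the post: {month, Amount_A, Amount_B}
            int[][] rows = {
                {1, 100, 100}, {1, 200, 100},   // Jan 2008
                {2, 100, 400}, {2, 300, 200},   // Feb 2008
            };

            // Day level: abs() per row, then sum.
            int dayLevel = 0;
            for (int[] r : rows) dayLevel += Math.abs(r[1] - r[2]);

            // Month level: sum per month first, then abs(), then sum.
            Map<Integer, int[]> byMonth = new LinkedHashMap<>();
            for (int[] r : rows) {
                int[] totals = byMonth.computeIfAbsent(r[0], k -> new int[2]);
                totals[0] += r[1];
                totals[1] += r[2];
            }
            int monthLevel = 0;
            for (int[] t : byMonth.values()) monthLevel += Math.abs(t[0] - t[1]);

            System.out.println("Day-level Absolute_Error for 2008:   " + dayLevel);   // 500
            System.out.println("Month-level Absolute_Error for 2008: " + monthLevel); // 300
        }
    }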

    Hi.
    Do this:
    1. Create Amount_A and Amount_B in BMM without SUM as default aggregation rule.
    2. Now, just go to Answers and make report with three columns:
    YEAR -- EXPRESSION 1 -- EXPRESSION 2
    EXPRESSION 1 is:
    sum(Amount_A) - sum(Amount_B)
    EXPRESSION 2 is:
    sum(abs(sum(Amount_A by MONTH) - sum(Amount_B by MONTH)))
    My example in Answers:
    TIMES.CALENDAR_YEAR
    sum(SALES.QUANTITY_SOLD_NORMAL) - sum(SALES.AMOUNT_SOLD_NORMAL)
    sum( abs(sum(SALES.QUANTITY_SOLD_NORMAL by TIMES.CALENDAR_MONTH_DESC) - sum(SALES.AMOUNT_SOLD_NORMAL by TIMES.CALENDAR_MONTH_DESC) ) )
    This will first summarize Amount_A and Amount_B at the month level, then take the difference, then the ABS, and then sum to the year level.
    This is a workaround to avoid larger RPD changes.
    Regards,
    Goran O
    http://108obiee.blogspot.com/

  • I have a 4630e and I want to print out the report of my paper usage for the month.

    I have a 4630 3-in-one, and I'm in the automatic refill program.  I want to get to and print the usage report for the month, to see if I can reduce the amount of paper I have allotted myself.

    Hi Georgiapat465, welcome to the HP Forums. If you want the month-to-date statistics of your Instant Ink usage, you can log into your account on hpconnected.com, go to the Services tab, and find Instant Ink there. That page will show your usage on screen.
    I hope this helps. Let me know if you have any other concerns.
    TwoPointOh
    I work on behalf of HP
    Please click “Accept as Solution ” if you feel my post solved your issue, it will help others find the solution.
    Click the “Kudos, Thumbs Up" on the bottom to say “Thanks” for helping!

  • One time Discount Coupon to be used during the Sales Order Creation

    I need help configuring a one-time coupon to be used during sales order creation. Customers who purchased more than a certain amount during 2009 will be given a discount coupon with a code that they can use just one time on their next purchase. I need help configuring this request.
    Thanks

    Hi Gilberto,
    Go to VK11.
    After maintaining the material and price, go to --> Additional data.
    You will see Validity, Assignments, Assignments for retail promotion, Limits for pricing, and Payments.
    Under 'Limits for pricing' you have three options: Max. condition value, Max. number of orders and Max. condition base value.
    In Max. number of orders, enter 1 so the special price is available only for the first order.
    Important note: to activate the 'Limits for pricing' options you need to tick 'Condition update' for the particular condition type in condition type customizing.
    For example, for condition type ZR00: if you want a special price for this condition type, limited to the first order only, 'Condition update' needs to be ticked in the customizing for ZR00; only then will you get the above option.
    I hope it is clear to you.
    Thank you,
    Anirudh

  • Best approach to add Task interaction processes using BeanShell in the ExecuteScript operation

    I am wondering if the following is the best way (thinking not) to accomplish the goal of determining the possible routes from a Task (Task ID is known) using BeanShell in the ExecuteScript operation in a short-lived LC process (taken from API docs and tweaked slightly).  The code does work, this is just a question of what would be optimal to build similar processes that can reach into more details.  I would like to know the best practice before building more such processes.
    import java.util.*;
    import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;
    import com.adobe.idp.dsc.clientsdk.ServiceClientFactoryProperties;
    import com.adobe.idp.taskmanager.dsc.client.query.TaskRow;
    import com.adobe.idp.taskmanager.dsc.client.query.TaskSearchFilter;
    import com.adobe.idp.taskmanager.dsc.client.task.ParticipantInfo;
    import com.adobe.idp.taskmanager.dsc.client.task.TaskInfo;
    import com.adobe.idp.taskmanager.dsc.client.task.TaskManager;
    import com.adobe.idp.taskmanager.dsc.client.*;
    import com.adobe.idp.um.api.infomodel.Principal;
    import com.adobe.livecycle.usermanager.client.DirectoryManagerServiceClient;
    import java.util.List;
    Properties connectionProps = new Properties();
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_DEFAULT_EJB_ENDPOINT, "jnp://servername:1099");
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_TRANSPORT_PROTOCOL, ServiceClientFactoryProperties.DSC_EJB_PROTOCOL);
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_SERVER_TYPE, "JBoss");
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_USERNAME, "administrator");
    connectionProps.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_PASSWORD, "password");
    ServiceClientFactory myFactory = ServiceClientFactory.createInstance(connectionProps);
    TaskManagerQueryService queryManager = TaskManagerClientFactory.getQueryManager(myFactory);
    TaskManager taskManager = TaskManagerClientFactory.getTaskManager(myFactory);
    long taskId = patExecContext.getProcessDataLongValue("/process_data/@taskId");
    TaskInfo taskInfo= taskManager.getTaskInfo(taskId);
    String [] routeNames = taskInfo.getRouteList();
    List routeNameList = patExecContext.getProcessDataListValue("/process_data/routes");
    // Copy each route name into the process list variable.
    for (int i = 0; i < routeNames.length; i++) {
        String currentRouteName = (String) routeNames[i];
        routeNameList.add(currentRouteName);
    }
    // Write the collected route names back into the process variable.
    patExecContext.setProcessDataListValue("/process_data/routes", routeNameList);

