Approach for reusable code/region for discussions

I have a workspace with N applications that all share a user table and security scheme. Options are presented based on permissions, and there are functions to check them.
In many of the apps I want to embed "discussion forum" logic. I can handle the permissions part, and linking items in an app to topics where needed... but here is my question.
I want to do this in a way that takes advantage of modularization and code reuse. I don't want to program the exact same logic (create post, review post, view posts, print the discussion board for a topic) in a region in every application that has a discussion board.
I want to create discussion board logic once and call it in each respective application.
Suggestions?
I am thinking the best way in HTML DB is to create my own application called "discuss" and call it from each of the other apps, then return to the calling app (in some way; I have not experimented with this yet)... but I sense some trade-offs.
It may be hard to integrate discussions with the items in each app (if I want to show an app item, whatever it is, or a list of them, and the discussion board on the same page)...
I could code it and call it in PL/SQL (Display_topic(p_topic_id_in)), but then do I give up pagination and the other HTML DB goodies in the discussion logic? Or does code for this already exist? Is there a way to call the same region from many applications?
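For what it's worth, the PL/SQL-procedure approach might look roughly like this (a sketch only: Display_topic is the name from above, while the table and column names are made up for illustration):

```sql
-- Sketch of a shared procedure each app could call from a PL/SQL region.
-- "discussion_posts" and its columns are hypothetical names.
CREATE OR REPLACE PROCEDURE display_topic (p_topic_id_in IN NUMBER)
AS
BEGIN
  FOR r IN (SELECT posted_by, posted_on, post_text
              FROM discussion_posts
             WHERE topic_id = p_topic_id_in
             ORDER BY posted_on)
  LOOP
    -- htp.p emits HTML into the page region
    htp.p('<p><b>' || r.posted_by || '</b> ('
          || TO_CHAR(r.posted_on, 'DD-MON-YYYY HH24:MI') || '):<br>'
          || r.post_text || '</p>');
  END LOOP;
END display_topic;
/
```

Each app would then call display_topic(:P1_TOPIC_ID) from a PL/SQL region; the trade-off, as noted, is losing the built-in report pagination.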
Any tips or suggestion for best way to proceed? I have a few methods and looking for thoughts.
Thanks,
Meyer

Hi Meyer,
Sounds like an interesting problem/application. I've toyed with the idea of something similar, though all I've done so far is think about it due to other work priorities.
One thing you might want to think about (I have no idea yet how this could be done; maybe Raj has some nifty JavaScript for it?) is giving your apps a button (or similar link) for the discussion app. The link would open the discussion app in a new window.
A potential problem might be having two different apps running on the user's machine, though. I've run into problems on my machine when I've tried to have one window open as a developer and another window open as a user: the session state gets mixed up between windows. I never fully investigated exactly what was going on or why, as I have two computers at my desk and it was faster and easier just to use one as user and the other as developer.
Just an idea that maybe others could expand on.
Bill Ferguson

Similar Messages

  • Performance of the LOD approach for network analysis

    Hi to all,
    I'm opening a new thread to discuss the performance of the LOD approach for network analysis using the Java API, continuing from this thread:
    Partitioning of the Network for using LOD API
    As a reminder:
    - I'm using Oracle 11g R2;
    - my network consists of 7,817,372 links and 6,662,079 nodes (a big network);
    - LINK_LEVEL in the links table is set to NULL;
    - I partitioned the network with these procedures:
    partition:
    EXEC sdo_net.spatial_partition('ITALIA', 'ITALIA_PART$', 10000, 'WORK_DIR_ITALIA', 'ITALIA_PART.log', 'a', 1);
    and for partition blob:
    EXEC sdo_net.generate_partition_blobs('ITALIA', 1, 'ITALIA_PBLOB$', true, true, 'WORK_DIR_ITALIA', 'ITALIA_PBLOB.log', 'a');
    - I'm using the LOD Java API for network analysis in the NetBeans IDE; I took the code from the NDM tutorial for the Hillsborough network.
    My first analysis computes the shortest path between two extreme nodes (about 1,500 km apart) and between two near nodes (about 80 km apart), using the Dijkstra and AStar algorithms.
    I ran the following tests on the execution times of the shortest-path computation:
    1) with a maximum of 10000 nodes per partition (the log file shows 1024 partitions were generated, with 1 link level):
    - between the two extreme nodes, about 1 minute and 50 seconds;
    - between the two near nodes, about 20 seconds.
    Then I re-executed the two procedures above, changing the maximum number of nodes per partition, and ran further tests:
    2) with a maximum of 15000 nodes per partition (512 partitions generated, 1 link level):
    - between the two extreme nodes, about 1 minute and 50 seconds, and sometimes it runs out of memory;
    - between the two near nodes, about 20 seconds.
    3) with a maximum of 5000 nodes per partition (2048 partitions generated, 1 link level):
    - between the two extreme nodes, about 1 minute and 50 seconds, and sometimes it runs out of memory;
    - between the two near nodes, about 20 seconds.
    4) with a maximum of 2000 nodes per partition (4096 partitions generated, 1 link level):
    - between the two extreme nodes, about 1 minute and 50 seconds, and sometimes it runs out of memory;
    - between the two near nodes, about 15 seconds.
    I think something is wrong, because I expected much lower execution times (at most 5-6 seconds for the extreme nodes).
    Even changing the maximum number of nodes per partition, the execution times do not change much.
    As a reminder, with the in-memory approach on Oracle 10gR2, every computation between two nodes took about 4 minutes and sometimes ran out of memory. With LOD on Oracle 11gR2 the execution times are reduced, but they are still too long for me.
    Now, my questions are:
    - @Jack Wang (if you're reading this): do you know the execution times for computing the shortest path between two nodes of the USA network (about 1500 km apart)? I recall that you used LOD for the USA network (56 million links and 20 million nodes).
    - Do you think I'm doing something wrong? How can I reduce the execution times of the network analysis?
    If you need any more information, just ask.
    Thank you all very much in advance.

    Jack Wang wrote:
    Are the 12/13 seconds computation times for the near node pair?
    Yes.
    You can look at an example of computing a hierarchical shortest path on a 2-level network (NAVTEQ_SF) under ndm_tutorial:
    ~\ndm_tutorial\examples\java\src\lod\SpWithMultiLinkLevels.java
    But I think that until yesterday I had not yet used LINK_LEVEL = 2 in the network analysis. In fact, I was using the code from ShortestPathAnalysis.java, where linkLevel is always set to 1.
    Now I'm using the SpWithMultiLinkLevels.java example, and I see that linkLevel is set to 2 before the Dijkstra and AStar algorithms are executed. That code works with the Hillsborough network. With my network, however (note: to test a network with 2 link levels I'm using a region of Italy with 700,000 links and 600,000 nodes, called ITAI11_METERS), I get a problem during readPartitionFromBlob.
    This is the code:
    package calcolopercorsolod;
    import java.io.*;
    import java.sql.*;
    import java.text.*;
    import java.util.*;
    import oracle.jdbc.OracleConnection;
    import oracle.jdbc.pool.OracleDataSource;
    import oracle.spatial.util.Logger;
    import oracle.spatial.network.UserDataMetadata;
    import oracle.spatial.network.lod.*;
    import oracle.spatial.network.lod.config.*;
    import oracle.spatial.network.lod.util.PrintUtility;
    public class SpWithMultiLinkLevels
    {
      private static NetworkAnalyst analyst;
      private static NetworkIO networkIO;

      private static void setLogLevel(String logLevel)
      {
        if("FATAL".equalsIgnoreCase(logLevel))
          Logger.setGlobalLevel(Logger.LEVEL_FATAL);
        else if("ERROR".equalsIgnoreCase(logLevel))
          Logger.setGlobalLevel(Logger.LEVEL_ERROR);
        else if("WARN".equalsIgnoreCase(logLevel))
          Logger.setGlobalLevel(Logger.LEVEL_WARN);
        else if("INFO".equalsIgnoreCase(logLevel))
          Logger.setGlobalLevel(Logger.LEVEL_INFO);
        else if("DEBUG".equalsIgnoreCase(logLevel))
          Logger.setGlobalLevel(Logger.LEVEL_DEBUG);
        else if("FINEST".equalsIgnoreCase(logLevel))
          Logger.setGlobalLevel(Logger.LEVEL_FINEST);
        else  //default: set to ERROR
          Logger.setGlobalLevel(Logger.LEVEL_ERROR);
      }

      public static void main(String[] args) throws Exception
      {
        String configXmlFile = "LODConfigs.xml";
        String logLevel      = "DEBUG";
        String dbUrl         = "jdbc:oracle:thin:@oracle:1521:mySID";
        String dbUser        = "myUser";
        String dbPassword    = "myPass";
        String networkName   = "ITAI11_METERS";
        long startNodeId     = 15323;
        long endNodeId       = 431593;
        int linkLevel        = 1;
        double costThreshold = 1550;
        int numHighLevelNeighbors = 8;
        double costMultiplier = 1.5;
        Connection conn      = null;

        //get input parameters
        for(int i=0; i<args.length; i++)
        {
          if(args[i].equalsIgnoreCase("-dbUrl"))
            dbUrl = args[i+1];
          else if(args[i].equalsIgnoreCase("-dbUser"))
            dbUser = args[i+1];
          else if(args[i].equalsIgnoreCase("-dbPassword"))
            dbPassword = args[i+1];
          else if(args[i].equalsIgnoreCase("-networkName") && args[i+1]!=null)
            networkName = args[i+1].toUpperCase();
          else if(args[i].equalsIgnoreCase("-linkLevel"))
            linkLevel = Integer.parseInt(args[i+1]);
          else if(args[i].equalsIgnoreCase("-configXmlFile"))
            configXmlFile = args[i+1];
          else if(args[i].equalsIgnoreCase("-logLevel"))
            logLevel = args[i+1];
        }

        // opening connection
        conn = LODNetworkManager.getConnection(dbUrl, dbUser, dbPassword);
        System.out.println("Network analysis for "+networkName);
        setLogLevel(logLevel);

        //load user specified LOD configuration (optional),
        //otherwise default configuration will be used
        InputStream config = ClassLoader.getSystemResourceAsStream(configXmlFile);
        LODNetworkManager.getConfigManager().loadConfig(config);
        //LODConfig c = LODNetworkManager.getConfigManager().getConfig(networkName);

        //get network input/output object
        networkIO = LODNetworkManager.getCachedNetworkIO(
            conn, networkName, networkName, null);

        //get network analyst
        analyst = LODNetworkManager.getNetworkAnalyst(networkIO);

        double[] costThresholds = {costThreshold};

        try
        {
          System.out.println("*****Begin: Shortest Path with Multiple Link Levels");
          System.out.println("*****Shortest Path Using Dijkstra");
          String algorithm = "DIJKSTRA";
          linkLevel = 2;
          costThreshold = 5000;
          LogicalSubPath subPath = analyst.shortestPathDijkstra(
              new PointOnNet(startNodeId), new PointOnNet(endNodeId), linkLevel, null);
          PrintUtility.print(System.out, subPath, true, 10000, 0);
          System.out.println("*****End: Shortest path using Dijkstra");

          System.out.println("*****Shortest Path using Astar");
          HeuristicCostFunction costFunction = new GeodeticCostFunction(0, -1, 0, -2);
          LinkLevelSelector lls = new DynamicLinkLevelSelector(
              analyst, 2, costFunction, costThresholds,
              numHighLevelNeighbors, costMultiplier, null);
          subPath = analyst.shortestPathAStar(
              new PointOnNet(startNodeId), new PointOnNet(endNodeId), null, costFunction, lls);
          PrintUtility.print(System.out, subPath, true, 10000, 0);
          System.out.println("*****End: Shortest Path Using Astar");
          System.out.println("*****End: Shortest Path with Multiple Link Levels");
        }
        catch (Exception e)
        {
          e.printStackTrace();
        }
        finally
        {
          if(conn!=null)
            try{conn.close();} catch(Exception ignore){}
        }
      }
    }
    And this is the output with the error (truncated):
    [LODNetworkAdaptorSDO::isNetworkPartitioned, DEBUG] Query String: SELECT p.PARTITION_ID FROM PROVA.ITAI11_PART$ p WHERE p.LINK_LEVEL = ? AND ROWNUM = 1 [2]
    [QueryUtility::prepareIDListStatement, DEBUG] Query String: SELECT NODE_ID, PARTITION_ID FROM PROVA.ITAI11_PART$ p WHERE p.NODE_ID IN ( SELECT column_value FROM table(:varray) ) AND LINK_LEVEL = ?
    [LODNetworkAdaptorSDO::readNodePartitionIds, DEBUG] Query linkLevel = 2
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 4, level 2
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM PROVA.ITAI11_PBLOB$ WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [4,2]
    [QueryUtility::prepareIDListStatement, DEBUG] Query String: SELECT NODE_ID, PARTITION_ID FROM PROVA.ITAI11_PART$ p WHERE p.NODE_ID IN ( SELECT column_value FROM table(:varray) ) AND LINK_LEVEL = ?
    [LODNetworkAdaptorSDO::readNodePartitionIds, DEBUG] Query linkLevel = 1
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 91, level 1
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM PROVA.ITAI11_PBLOB$ WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [91,1]
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 91, level 2
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM PROVA.ITAI11_PBLOB$ WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [91,2]
    oracle.spatial.network.lod.LODNetworkException: java.lang.NullPointerException
    at oracle.spatial.network.lod.NetworkIOImpl.readPartitionFromBlob(NetworkIOImpl.java:549)
    at oracle.spatial.network.lod.NetworkIOImpl.readLogicalPartition(NetworkIOImpl.java:436)
    at oracle.spatial.network.lod.CachedNetworkIOImpl.readLogicalPartition(CachedNetworkIOImpl.java:114)
    at oracle.spatial.network.lod.CachedNetworkIOImpl.readLogicalPartition(CachedNetworkIOImpl.java:105)
    at oracle.spatial.network.lod.NetworkExplorer.getPartition(NetworkExplorer.java:335)
    at oracle.spatial.network.lod.LabelSettingAlgorithm.getElementPartition(LabelSettingAlgorithm.java:520)
    at oracle.spatial.network.lod.LabelSettingAlgorithm.expand(LabelSettingAlgorithm.java:561)
    at oracle.spatial.network.lod.LabelSettingAlgorithm.shortestPath(LabelSettingAlgorithm.java:1362)
    at oracle.spatial.network.lod.NetworkAnalyst.shortestPathHierarchical(NetworkAnalyst.java:2523)
    at oracle.spatial.network.lod.NetworkAnalyst.shortestPathDijkstra(NetworkAnalyst.java:2291)
    at oracle.spatial.network.lod.NetworkAnalyst.shortestPathDijkstra(NetworkAnalyst.java:2268)
    at oracle.spatial.network.lod.NetworkAnalyst.shortestPathDijkstra(NetworkAnalyst.java:2249)
    at calcolopercorsolod.SpWithMultiLinkLevels.main(SpWithMultiLinkLevels.java:135)
    Caused by: java.lang.NullPointerException
    at oracle.spatial.network.lod.NetworkIOImpl.readPartitionFromBlob(NetworkIOImpl.java:542)
    ... 12 more
    I don't understand why the analysis runs a query with PARTITION_ID = 91 and LINK_LEVEL = 2:
    Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM PROVA.ITAI11_PBLOB$ WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [91,2]
    In fact, I have 512 PARTITION_IDs associated with LINK_LEVEL = 1 and only 16 PARTITION_IDs associated with LINK_LEVEL = 2. Why does it search for PARTITION_ID = 91 with LINK_LEVEL = 2? That correspondence doesn't exist.
    Where am I going wrong?
    Note that if I set LINK_LEVEL = 1, it works.
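    (For comparison: a hierarchical query presupposes that both link levels were partitioned and had their blobs generated. Mirroring the EXEC calls quoted earlier in this thread, a two-level setup would look something like the sketch below; the parameter order is copied from those calls with only the link level changed, so verify it against the SDO_NET documentation before running.)

```sql
-- Sketch only: repeats the earlier partitioning calls for link level 2.
-- Assumes major links already have LINK_LEVEL = 2 in the link table.
EXEC sdo_net.spatial_partition('ITAI11_METERS', 'ITAI11_PART$', 10000, 'WORK_DIR_ITALIA', 'ITAI11_PART.log', 'a', 2);
EXEC sdo_net.generate_partition_blobs('ITAI11_METERS', 2, 'ITAI11_PBLOB$', true, true, 'WORK_DIR_ITALIA', 'ITAI11_PBLOB.log', 'a');
```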
    Thank you very much

  • Best approach for RFC call from Adapter module

    What is the best approach for making an RFC call from a receiver file adapter module?
    1. JCo
    2. Is it possible to use the Mapping Lookup API classes for this, or do those run only in the mapping runtime environment?
    3. Any other way?
    Has anybody ever tried this? Any pointers????
    Regards,
    Amol

    Hi,
    The JCo lookup is internally the same as the JCo call; the only difference is that you are not hardcoding the system-related data in the code, so it is easier to maintain during transports.
    Also, the JCo lookup code is more readable.
    Regards
    Vijaya

  • Best approach for syndication in Central MDM

    MDM 7.1
    CE 7.2
    ERP 6 EHP4
    PI 7.1 EHP1
    We are currently developing a custom application using CE/BPM workflow for central maintenance of customer master data. One of the topics under discussion is the right approach for syndication once a record is complete.
    [This |http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/60a3118e-3c3e-2d10-d899-ddd0b963beba?quicklink=downloads&overridelayout=true] SAP document on collaborative material master data creation provides one way to achieve this syndication: first calling a web service from BPM to create the record in ERP before checking it in within MDM. While I am personally fine with the approach, some of my colleagues aren't too keen on issuing synchronous calls from BPM. Rather, they would like to use the syndication engine of MDM to transmit data to downstream systems (currently only SAP ERP) using IDocs. But there is a caveat here: to use syndication, the record has to be checked in.
    The problem is that if the record is checked in MDM, it is ready for modification. However, the asynchronous call to ERP using Idocs for creation of customer master might fail for any number of reasons. In this case, the MDM record might need a modification before resubmitting to ERP. In the meantime, since the record was checked in before syndication, someone else might have checked it out, potentially resulting in data quality issues. So to avoid this situation, the developer has decided to take the approach to check in -> syndicate -> check out -> wait for confirmation Idoc -> check in if success. This isn't a clean approach to syndicate but might address the record locking issue.
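    The check in -> syndicate -> check out -> wait -> check in sequence described above can be sketched as a tiny state machine. All names below are invented for illustration; nothing here is an MDM API:

```java
// Hypothetical model of the flow: check in -> syndicate -> check out (lock) ->
// wait for the confirmation IDoc -> final check-in on success, keep the lock on failure.
enum RecordState { CHECKED_OUT, CHECKED_IN, AWAITING_CONFIRMATION, SYNCED }

class SyndicationFlow {
    RecordState state = RecordState.CHECKED_OUT;

    void checkIn() { state = RecordState.CHECKED_IN; }

    // Syndication requires the record to be checked in; re-check-out immediately
    // afterwards so nobody can modify it while ERP processes the IDoc.
    void syndicateAndLock() {
        if (state != RecordState.CHECKED_IN) throw new IllegalStateException("must be checked in");
        state = RecordState.AWAITING_CONFIRMATION;
    }

    // The confirmation IDoc decides: check in on success, stay checked out for rework.
    void onConfirmationIdoc(boolean success) {
        state = success ? RecordState.SYNCED : RecordState.CHECKED_OUT;
    }
}
```

    The point of the sketch is the failure branch: on a failed IDoc the record stays checked out (locked) for rework, which is exactly the locking concern the developer's approach tries to address.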
    Another consideration is to design the application with the view that sometime in the future this master data might have to be syndicated to other SAP and non-SAP systems as well. Ensuring that syndication to all downstream systems is complete before checking in within MDM can be a tricky requirement, and might need some complex ccBPM development or evaluating something similar to a two-phase commit (which might be overkill). In any case, a best-practice approach for keeping downstream systems in sync with MDM in a central MDM scenario has to be shared by SAP. So it would be good to have comments from the people who developed the reference application for collaborative material master data creation.
    If there are any customers who have come up with a custom solution which works, please do share the experience.
    Thanks and regards,
    Shehryar

    Thanks Ravi. While there is more than one possible solution to the immediate problem, I am actually looking for a design pattern which SAP recommends, or which a customer has developed, to address the issues related to synchronization of master data in a central MDM environment.
    The idea behind a central master data management function, as you know, is that all participating business systems use the same basic master data being authored in MDM. This data has to be synchronized with all participating systems, rather than just one system. To me, a ccBPM workflow or 2 phase commit design pattern seem to be the solution. But it would be good to know how other customers are addressing the issue of master data synchronization with multiple systems, or SAP's recommendations for this issue.
    Regards,
    Shehryar

  • Best approach for publishing a paid version and an ad supported free version of the same app

    Hi,
    One of my Windows 8 Store apps is almost ready for store submission.
    What is the best approach for publishing a paid version and an ad-supported free version of the same app?
    Can I do the following?
    1. Submit the app with an unlimited free trial to the store
    2. During the free trial, ads will be displayed
    3. If the user purchases the app, ads will no longer be displayed
    Any advice is greatly appreciated.
    Best Regards
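    For reference, the ad logic in steps 1-3 reduces to a single license check at startup. In a real WinRT app the flag would come from the store licensing API (CurrentApp.LicenseInformation.IsTrial); the sketch below is deliberately generic, with the flag passed in:

```java
// Generic sketch: decide whether to show ads from the trial flag.
// In a real Windows Store app this flag would come from the licensing API.
class AdGate {
    static boolean shouldShowAds(boolean isTrial) {
        return isTrial;   // trial users see ads; purchasers do not
    }
}
```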

    Although the in-app purchase option is good, for ad-based apps my approach is different.
    I would suggest putting two different apps in the store: one free with ads and one without. The reason is the extra references and XAML ad controls you would otherwise carry in the paid version of the app, depending on how many you have. I would keep my apps as light and clean as possible, especially a paid app.
    I currently manage both the free and the paid app through one solution and reuse most of the code except for the views.
    Binoj Daniel www.CodeRewind.com

  • Recommended approach for validating page content on activation?

    Hi,
    Is there a recommended approach to implementing validation in CQ5 (we are using 5.5) that will run when a Page is activated?  I have been reading up on different approaches for this, but have not been able to find a clear solution.
    The requirement that I have is for a News article to only be published if all of its mandatory fields have been supplied - for example Heading, Image, Body Text, Author. Simply implementing validation on the edit dialog doesn't really solve the issue: each component is edited separately, so it is easy for a content author to miss entering the Author, for example.
    What I'm trying to accomplish is to run some custom validation code when the page is replicated, and cancel the replication if the validation does not pass.  Ideally this would show a "Validation Failed" or similar notification in the CQ user interface, the same way that the "Activation Successful" notification is shown.
    The approaches that I have investigated so far are:
    Replication Action Event Handler
    http://helpx.adobe.com/cq/kb/ReplicationListener.html
    - This triggers at the right time and provides sufficient information (path to page), but doesn't provide a mechanism to stop the replication.
    - Potentially this could be combined with further service calls to stop replication?
    Workflow Action Event Handler
    http://dev.day.com/docs/en/cq/current/workflows/wf-extending.html#par_title_0
    - This doesn't seem to fire when activating a page
    Custom Replication Agent
    http://dev.day.com/docs/en/cq/5-5/deploying/configuring_cq/replication.html
    http://forums.adobe.com/message/4785913#4785913#4785913
    - This would provide a hook for validation, but seems like massive overkill - is there no way to simply hook into the existing replication process and cancel it if the page is not valid?
    Any advice on how to best accomplish this would be much appreciated.
    Thanks,
    Ray

    Try replication preprocessor. http://wemcode.wemblog.com/replication-preprocessor
    Yogesh
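    Whichever hook is used to cancel the replication, the validation step itself is simple. A minimal sketch of the mandatory-field check (plain Java with invented names, not CQ APIs; real code would read these properties from the page's JCR nodes and, in a preprocessor, abort replication when fields are missing):

```java
import java.util.*;

class MandatoryFieldCheck {
    static final List<String> MANDATORY = Arrays.asList("heading", "image", "bodyText", "author");

    // Returns the mandatory fields that are absent or blank;
    // an empty result means the page may be activated.
    static List<String> missingFields(Map<String, Object> pageProperties) {
        List<String> missing = new ArrayList<>();
        for (String field : MANDATORY) {
            Object value = pageProperties.get(field);
            if (value == null || value.toString().trim().isEmpty()) {
                missing.add(field);
            }
        }
        return missing;
    }
}
```

    The returned list can also be used to build the "Validation Failed" message shown to the author.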

  • Best approach for uploading document using custom web part-Client OM or REST API

    Hi,
    I am using a custom upload Visual Web Part for uploading documents into my document library with a lot of metadata.
    The columns include single line of text, drop-down list, lookup columns, and managed metadata (taxonomy) columns.
    So I would like to know which is the best approach for uploading.
    Currently I am trying to use the traditional SSOM (server object model). I would like to know the best approach for uploading files into document libraries.
    I have hundreds of sub-sites with 30+ document libraries within them. Currently it takes a few minutes to upload the files in my dev environment; I am just wondering what would happen if the number of sub-sites reaches a hundred!
    I am looking at this from the performance perspective.
    My thought process is:
    1) Implement Client OM
    2) REST API
    Has anyone tried these approaches before, and which approach provides better performance?
    If anyone has sample source code or links, please provide them.
    Also, are there any restrictions on the size of the file uploaded?
    Any suggestions are appreciated!

    Try below:
    http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx
    http://stackoverflow.com/questions/9847935/upload-a-document-to-a-sharepoint-list-from-client-side-object-model
    http://www.codeproject.com/Articles/103503/How-to-upload-download-a-document-in-SharePoint
    public void UploadDocument(string siteURL, string documentListName,
        string documentListURL, string documentName, byte[] documentStream)
    {
        using (ClientContext clientContext = new ClientContext(siteURL))
        {
            // Get the document library
            List documentsList = clientContext.Web.Lists.GetByTitle(documentListName);
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content byte[], i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = siteURL + documentListURL + documentName;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            // Update the metadata for a field named "DocType"
            uploadFile.ListItemAllFields["DocType"] = "Favourites";
            uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }
    }
    If this helped you resolve your issue, please mark it Answered

  • Best approach for a cross language application

    I am working on a project where we are planning to write the data acquisition code in LabVIEW, but the rest of the application is being written in C# by some developers that are unfamiliar with LabVIEW. I am looking for suggestions for the best architecture for this kind of application.
    Traditionally, in my LabVIEW applications that require UI, data acquisition, logging, and analysis, I build a tiered producer/consumer architecture. I usually build a queued, event-driven producer/consumer and then create additional consumer loops to handle data as it propagates out of the acquisition loop. In this project, I am basically looking to create only the acquisition loop in LabVIEW, with the rest of the "loops" being generated by the C# guys using the .NET 4.0 CLR.
    The original plan was to make my loop as I usually would in LabVIEW and build it as a .NET interop. I hadn't really sorted it out yet, but the plan was basically to get configure and start commands from the C# GUI (not sure how to replace the queue here), and to use some event to get the analysis parts of the program to trigger at appropriate times based on data availability. It's come to my attention that LabVIEW-generated .NET interops cannot run in 4.0 CLR applications though, so I'm looking for alternatives.
    Basically, I'd like to hear about similar applications and what has worked and not worked. I'm particularly interested in good approaches for interprocess communication between LabVIEW and a .NET app, and also any thoughts on triggering actions in the .NET app from the LabVIEW portion (can this be done without the .NET code polling something?).
    Thanks!
    Chris

    Hi Chris,
    C. Minnella wrote:I'm particularly interested in good approaches for interprocess communication between LabVIEW and a .NET app, and also any thoughts on triggering actions in the .NET app from the LabVIEW portion (can this be done without the .NET code polling something?).
    whenever it comes to communication between Windows applications, I don't stop recommending the highly underrated Microsoft Message Queuing (MSMQ) infrastructure, and in my opinion it practically screams to be used in your scenario:
    1. Let LabVIEW collect the data and place it into a designated data queue.
    2. Let the C# exe attach to the queue and do the data retrieval/evaluation/storage/whatever via OnMessageReceived events.
    3. Let C# send control messages to a second queue, which is read by LabVIEW.
    MSMQ is incredibly easy to use, yet very powerful, and has many aspects and benefits for interprocess communication, especially between different machines in a LAN; it is a real pity that it's so little known. Just have a look at the following thread, especially at the tiny LabVIEW example I've placed there: http://forums.ni.com/t5/LabVIEW/MSMQ-with-Labview/m-p/154334
    This could be done better on the LabVIEW side (event-based rather than polling), but as you just want to send some configuration and control commands, it's okay like this.
    Unfortunately, there are not too many good resources about MSMQ on the web that explain the coding basics well. The MSDN magazine has some great articles if you're somewhat experienced (like http://msdn.microsoft.com/en-us/magazine/cc163920.aspx ). What I found really helpful, and what gives a great introduction, is this book: http://amzn.com/1590593464
    Give MSMQ a try and have fun with it!
    Cheers,
    Hans
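    Hans's two-queue layout can be sketched with in-process BlockingQueues standing in for the two MSMQ queues (with real MSMQ, LabVIEW and the C# exe would each hold one end of each queue across process or machine boundaries; everything below is illustrative only):

```java
import java.util.concurrent.*;

class TwoQueueSketch {
    // Queue 1: the acquisition side pushes data; the consumer reacts message by message.
    static final BlockingQueue<double[]> dataQueue = new LinkedBlockingQueue<>();
    // Queue 2: the consumer sends control commands back to the acquisition side.
    static final BlockingQueue<String> controlQueue = new LinkedBlockingQueue<>();

    // "LabVIEW" side: publish one block of samples.
    static void publishSamples(double[] samples) throws InterruptedException {
        dataQueue.put(samples);
    }

    // "LabVIEW" side: check for a control command between reads (null when none pending).
    static String pollCommand() {
        return controlQueue.poll();
    }

    // "C#" side: a blocking take() plays the role of the OnMessageReceived event.
    static int consumeOnce() throws InterruptedException {
        double[] samples = dataQueue.take();
        controlQueue.put("STOP");          // e.g. tell acquisition to stop after this block
        return samples.length;
    }
}
```

    The polling on the control queue matches the caveat in the post: fine for occasional configuration and control commands, even if an event-based mechanism would be cleaner.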

  • Best approach for jtree with each node having data to be displayed in 2rows

    Hi,
    I need directions on the approach for constructing a JTree. I am not sure whether this is possible with a JTree.
    The format of the tree will be as shown below:
    +JTREE
    -JTREE
    - Name1 Age 1 Id1
    Address Details1(Should be a button)
    - Name2 Age 2 Id2
    Address Details2(Should be a button)
    The problem here is that each child node has two rows. The first row has three columns, and the second row has a button which takes the user to a new screen on click. Any directions on how to approach this problem would be helpful.
    Thanks for your help
    Ravi

    Hi,
    Thanks for the suggestion. Will this approach work if I have to display a button in the second row and content in the first row (can these two rows together be given as one tree node)?
    Any sample example code will be useful.
    thanks and regards
    ravi
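    For what it's worth, a minimal sketch of the custom-renderer idea in Swing (class and field names are made up). The usual caveat applies: a renderer is only a painted "stamp", so the button will not receive clicks by itself; you would additionally need a TreeCellEditor, or a mouse listener that maps click coordinates to the node.

```java
import java.awt.*;
import javax.swing.*;
import javax.swing.tree.*;

// Hypothetical user object carrying the fields for both display rows.
class PersonNode {
    final String name, id;
    final int age;
    PersonNode(String name, int age, String id) { this.name = name; this.age = age; this.id = id; }
}

// Renders each person node as a two-row panel: data line on top, button below.
class TwoRowRenderer implements TreeCellRenderer {
    public Component getTreeCellRendererComponent(JTree tree, Object value, boolean selected,
            boolean expanded, boolean leaf, int row, boolean hasFocus) {
        Object user = ((DefaultMutableTreeNode) value).getUserObject();
        if (!(user instanceof PersonNode)) {
            return new JLabel(String.valueOf(user));          // plain label for the root etc.
        }
        PersonNode p = (PersonNode) user;
        JPanel panel = new JPanel(new GridLayout(2, 1));      // two stacked rows
        panel.add(new JLabel(p.name + "   " + p.age + "   " + p.id));  // row 1: three columns
        panel.add(new JButton("Address Details"));            // row 2: the button
        return panel;
    }
}
```

    Install it with tree.setCellRenderer(new TwoRowRenderer()) and call tree.setRowHeight(0) so each row is sized from the renderer instead of a fixed height.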

  • Right Approach for Applications & Projects

    Hi
    We have a big application with many modules in it, spanned across different teams. What is the right approach for defining applications, model projects, and UI projects? Do we need to create an application for each module? Or is it better to use a single application with many model and many UI projects? Or many model projects and a single UI project? There can be dependencies on other modules.
    Thanks
    Suneesh

    Suneesh,
    Lots of discussions on the ADF Enterprise Methodology Group on this topic:
    http://groups.google.com/group/adf-methodology/browse_thread/thread/e7c9d557ab03b1cb
    http://groups.google.com/group/adf-methodology/browse_thread/thread/bd0a6ec3255d8a7a#
    http://groups.google.com/group/adf-methodology/browse_thread/thread/01f4aced1f963061#
    http://groups.google.com/group/adf-methodology/browse_thread/thread/3347664e066fc44f#
    http://groups.google.com/group/adf-methodology/browse_thread/thread/ccfe981634569d38#
    http://groups.google.com/group/adf-methodology/browse_thread/thread/218f2f5f51a6f853#
    http://groups.google.com/group/adf-methodology/browse_thread/thread/0fbb4b8a267369de#
    John

  • Design Patterns, best approach for this app

    Hi all,
    I am starting with design patterns, and I would like to hear your opinion on what would be the best approach for this app.
    This is basically an app for data monitoring, analysis, and logging (voltage, temperature & vibration).
    I am using 3 devices for N channels (NI 9211A, NI 9215A, NI PXI 4472), all running at different rates, asynchronously.
    Signals are being processed and monitored for logging at a rate specified by the user, and in real time as well.
    Individual devices can be initialized or stopped at any time.
    Basically I'm using 5 loops:
    *1.- GUI: Stop App, Reload Plot Names  (Event handling)
    *2.- Chart & Log:  Monitors Data and Start/Stop log data at a specified time in the GUI (State Machine)
    *3.- Temperature DAQ monitoring @ 3 S/s  (State Machine)   NI 9211A
    *4.- Voltage DAQ monitoring and scaling @ 1K kS/s (State Machine) NI 9215A
    *5.- Vibration DAQ monitoring and Analysis @ 25.6 kS/s (State Machine) NI PXI 4472
    i have attached the files for review, thanks in advance for taking the time.
    Attachments:
    V-T-G Monitor_Logger.llb ‏355 KB

    mundo wrote:
    thanks Will for your response,
    so basically I could apply a producer/consumer architecture for just the vibration analysis loop? Or for all data being collected by the Monitor/Logger loop?
    Is it OK to have individual loops for every DAQ device, as shown?
    thanks.
    You could use the producer/consumer architecture to split the areas where you are doing both the data collection and the analysis in the same state machine. If one of these processes is not time-critical, or the data rate is slow enough, you could leave it in a single state machine. I admit that I didn't look through your code, but based purely on the descriptions above I would imagine that you could change the three collection state machines to use a producer/consumer architecture. I would leave your UI processing in its own loop, as well as the logging process. If the logging is time-critical, you may want to split that as well.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
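    As a language-neutral illustration of the split Mark describes (not LabVIEW, obviously): the acquisition loop only enqueues, and a separate consumer loop dequeues and analyzes, so a slow analysis step can never stall the DAQ read. Sketched in Java with a faked DAQ read:

```java
import java.util.*;
import java.util.concurrent.*;

class ProducerConsumerSketch {
    static final int POISON = -1;   // sentinel telling the consumer to shut down

    // Runs a fake acquisition loop and an analysis loop in parallel,
    // returning everything the consumer processed.
    static List<Integer> run(int sampleCount) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);
        List<Integer> analyzed = Collections.synchronizedList(new ArrayList<>());

        Thread producer = new Thread(() -> {       // stands in for the DAQ loop
            try {
                for (int s = 0; s < sampleCount; s++) queue.put(s);   // fake samples
                queue.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {       // stands in for the analysis loop
            try {
                for (int s; (s = queue.take()) != POISON; ) analyzed.add(s);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
        return analyzed;
    }
}
```

    The bounded queue also gives natural back-pressure: if analysis falls behind, the producer blocks rather than the data being lost silently.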

  • What is the best approach for combining events?

    When I work on a wedding, my current workflow involves creating a compound clip for each section of the video (e.g. reception, ceremony, dancing, etc.). Then I add the compound-clip 'sequences' into a single project to add the chapter markers and export to a single master file.
    I like the idea of managing each section in a project rather than a compound clip now that projects are part of the library in 10.1, but is there a good way to combine multiple projects (one for each section) into a single master project, or would I still need to copy the contents of each project and paste them into the master project?
    Maybe I am best off continuing with my current workflow.

    Just saw the discussion title - it should have said "What is the best approach for combining projects?"

  • Design approach for custom Fiori application

    Dear Experts,
    Good day to all...!
    I have a query about finalizing the design approach for a custom Fiori application we are developing using SAPUI5.
    Current application design and features:
    We currently have an application that is used on laptops. The application structure is SAP R/3 --> SUP --> UI (using .NET), i.e. back-end --> middleware --> UI.
    The UI is hosted on an IIS server, and the application is a desktop type, so users can work with the application offline as well.
    Once connected to the internet, they push all the data back to SAP via SUP.
    Proposal:
    We are planning to migrate the same application to Fiori with the same offline features, extending it to mobiles and other devices.
    I have a few queries here.
    What will be the best approach for deploying the application: SUP (or the latest version of SMP) or SAP R/3?
    If SAP R/3 is used to deploy the app:
    If we choose to deploy the application in R/3, how do we support offline usage on mobiles and devices?
    Will HTML5 local storage or IndexedDB be sufficient to support offline usage?
    In this case, shall we drop SUP/SMP, since the application is accessed directly from SAP R/3?
    If SUP/SMP is used to deploy the app:
    In this case, do I need to create a hybrid application (wrapping the UI5 files into a hybrid app) to support mobiles and devices as a native application? Correct me if I am wrong. :)
    I hope I can use the SUP/SMP local storage options to support offline usage? Correct me if I am wrong. :)
    What will be the best option to support desktop offline usage?
    We are yet to take a decision on this. Please provide your valuable inputs, which will help us take some decisions.
    Thanks & Regards
    Rabin D

    Hi Anusha,
    considering the reusability aspect, the component approach is the much better one (see also the best-practices chapter on components in the developer guide, SAPUI5 SDK - Demo Kit).
    It allows you to reuse that component in different applications or other UI components.
    I also think that the Application.js approach will not work with Fiori, because the Fiori Launchpad loads the Component.js of the Fiori app in a component container.
    Best Regards, Florian

  • What's the best approach for handling about 1300 connections in Oracle.

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various user types. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables; but we still have to maintain all data in another schema, so we need to update two schemas in a given session: the per-user schema plus the schema holding all the data. There may be updating problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be many more records than in the previous case.
    Which is the best case?
    Please give your valuable ideas.

    That is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.
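    Whichever schema layout is chosen, 1300 users normally should not mean 1300 dedicated sessions: a pool of a few dozen connections shared by all users (or Oracle shared server) is the usual answer. The pooling idea in miniature, with placeholder strings standing in for JDBC connections (in practice you would use a ready-made pool, e.g. the one in your application server, rather than writing your own):

```java
import java.util.concurrent.*;

class ConnectionPoolSketch {
    private final BlockingQueue<String> pool;

    ConnectionPoolSketch(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add("conn-" + i);  // "open" the connections up front
    }

    // Callers block here when all connections are busy, instead of opening session #1301.
    String borrow() throws InterruptedException { return pool.take(); }

    void release(String conn) { pool.add(conn); }

    int available() { return pool.size(); }
}
```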

  • The danger of memory target in Oracle 11g - request for discussion.

    Hello, everyone.
    This is not a question, but kind of request for discussion.
    I believe that many of you heard something about automatic memory management in Oracle 11g.
    The concept is that Oracle manages the target size of SGA and PGA. Yes, believe it or not, all we have to do is just to tell Oracle how much memory it can use.
    But I have a big concern on this. The optimizer takes the PGA size into consideration when calculating the cost of sort-related operations.
    So what would happen when Oracle dynamically changes the target size of PGA? Following is a simple demonstration of my concern.
    UKJA@ukja116> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    -- Configuration
    *.memory_target=350m
    *.memory_max_target=350m
    create table t1(c1 int, c2 char(100));
    create table t2(c1 int, c2 char(100));
    insert into t1 select level, level from dual connect by level <= 10000;
    insert into t2 select level, level from dual connect by level <= 10000;
    -- First 10053 trace
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2
    alter session set events '10053 trace name context off';
    -- Do aggressive hard parse to make Oracle dynamically change the size of memory segments.
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
      vc       sys_refcursor;
      vs        varchar2(1000);
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 10000000 loop
        execute immediate 'select count(*) from t1 where rownum = ' || (idx+1)
              into va;
        if mod(idx, 1000) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v   -- views for x$ table
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it!');
            exit;
          end if;
        end if;
      end loop;
    end;
    -- As to alert log file,
    25000th execution
    26000th execution
    27000th execution
    28000th execution
    29000th execution
    30000th execution
    yep, I got it! <-- the pga target changed with 30000th hard parse
    -- Second 10053 trace for same query
    alter session set events '10053 trace name context forever, level 1';
    select /*+ use_hash(t1 t2) */ count(*)
    from t1, t2
    where t1.c1 = t2.c1 and t1.c2 = t2.c2
    alter session set events '10053 trace name context off';
    With the above test case, I found that:
    1. Oracle invalidates the query when internal pga aggregate size changes, which is quite natural.
    2. With changed pga aggregate size, Oracle recalculates the cost. These are excerpts from the both of the 10053 trace files.
    -- First 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 11468 KB
    _smm_px_max_size                    = 28672 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    -- Second 10053 trace file
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
    Compilation Environment Dump
    _smm_max_size                       = 13107 KB
    _smm_px_max_size                    = 32768 KB
    optimizer_use_sql_plan_baselines    = false
    optimizer_use_invisible_indexes     = true
    Bug Fix Control Environment
    The 10053 trace file clearly says that Oracle recalculates the cost of the query with the change of the internal PGA aggregate target size. So there is a great danger of an unexpected plan change while Oracle dynamically controls the memory segments.
    I believe that this is a designed behavior, but the negative side effect is not negligible.
    I just like to hear your opinions on this behavior.
    Do you think that this is acceptable? Or is this another great feature that nobody wants to use like automatic tuning advisor?
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================

    I made a slight modification with my test case to have mixed workloads of hard parse and logical reads.
    *.memory_target=200m
    *.memory_max_target=200m
    create table t3(c1 int, c2 char(1000));
    insert into t3 select level, level from dual connect by level <= 50000;
    declare
      pat1     varchar2(1000);
      pat2     varchar2(1000);
      va       number;
    begin
      select ksppstvl into pat1
        from sys.xm$ksppi i, sys.xm$ksppcv v
        where i.indx = v.indx
        and i.ksppinm = '__pga_aggregate_target';
      for idx in 1 .. 1000000 loop
        -- try many patterns here!
        execute immediate 'select count(*) from t3 where 10 = mod('||idx||',10)+1' into va;
        if mod(idx, 100) = 0 then
          sys.dbms_system.ksdwrt(2, idx || 'th execution');
          for p in (select ksppinm, ksppstvl
              from sys.xm$ksppi i, sys.xm$ksppcv v
              where i.indx = v.indx
              and i.ksppinm in ('__shared_pool_size', '__db_cache_size', '__pga_aggregate_target')) loop
              sys.dbms_system.ksdwrt(2, p.ksppinm || ' = ' || p.ksppstvl);
          end loop;
          select ksppstvl into pat2
          from sys.xm$ksppi i, sys.xm$ksppcv v
          where i.indx = v.indx
          and i.ksppinm = '__pga_aggregate_target';
          if pat1 <> pat2 then
            sys.dbms_system.ksdwrt(2, 'yep, I got it! pat1=' || pat1 ||', pat2='||pat2);
            exit;
          end if;
        end if;
      end loop;
    end;
    /
    This test case showed an expected and reasonable result, like the following:
    100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    300th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    400th execution
    __shared_pool_size = 92274688
    __db_cache_size = 16777216
    __pga_aggregate_target = 83886080
    500th execution
    __shared_pool_size = 88080384
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1100th execution
    __shared_pool_size = 92274688
    __db_cache_size = 20971520
    __pga_aggregate_target = 83886080
    1200th execution
    __shared_pool_size = 92274688
    __db_cache_size = 37748736
    __pga_aggregate_target = 58720256
    yep, I got it! pat1=83886080, pat2=58720256
    Oracle kept being bounced between shared pool and buffer cache sizes, and at about the 1200th execution Oracle suddenly stole some memory from the PGA target area to increase the db cache size.
    (I'm still in the dark on this automatic memory target management in 11g. More research is needed!)
    I think that this is very clear and natural behavior. I just want to point out that this would result in unwanted catastrophe under special cases, especially with some logic holes and bugs.
    ================================
    Dion Cho - Oracle Performance Storyteller
    http://dioncho.wordpress.com (english)
    http://ukja.tistory.com (korean)
    ================================
