What is the best approach to display the top most-accessed links of a portal?

Hi Experts,
I have a requirement to display the most accessed links of a portal in EP.
From all the links in the portal, I need to display the ten most accessed links (by users) in an iView.
1) When a user clicks on a link, the click is recorded in a trace file.
2) The number of clicks in the portal is high volume: 25,000+.
3) I have written a Java program to aggregate the data, but it uses buffered input/output operations in memory, and it may cause memory problems in the production system.
4) Can we use a BI approach to handle the large amount of data?
5) Do we have any trace-file-extractor kind of tool to aggregate the data?
I need your suggestions on which approach I should take: BI or EP (portal components)?
Please share your views on the above requirements.
Thank you,
Mahesh Narkuda

Hi Mahesh,
Using the Portal Activity Report (PAR) will allow you to collect the following data: which pages and iViews were visited, and who visited them.
The data is collected and flushed to your database.
PAR comes with out-of-the-box reports that you can use for your needs.
If you decide to use the Activity Data Collector (ADC) instead, the data is collected and saved to text-based files on your operating system.
The ADC does not have an out-of-the-box reporting system, but it can collect more types of data and it is more configurable.
Hope this helps.
Best Regards,
Saar Dagan
Edited by: Saar Dagan on Feb 27, 2012 2:16 PM
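
On point 3 of the original question: the aggregation does not have to hold the whole trace file in memory. Below is a minimal streaming sketch in plain Java; the file name ("clicks.trc") and the tab-separated line format are assumptions, so adapt the parsing to whatever your trace actually contains.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.Map;

    public class TopLinks {
        public static void main(String[] args) throws IOException {
            Map<String, Integer> counts = new HashMap<>();
            // Read the trace line by line; only the per-link counters stay in memory,
            // never the whole file.
            try (BufferedReader reader = Files.newBufferedReader(Paths.get("clicks.trc"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] fields = line.split("\t");   // assumed: tab-separated, link URL in column 2
                    if (fields.length > 1) {
                        counts.merge(fields[1], 1, Integer::sum);
                    }
                }
            }
            // Print the ten most clicked links, highest count first.
            counts.entrySet().stream()
                  .sorted(Map.Entry.<String, Integer>comparingByValue(Comparator.reverseOrder()))
                  .limit(10)
                  .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
        }
    }

Since only one counter per distinct link is kept, memory use stays small no matter how many clicks (25,000+ or far more) the trace contains.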

Similar Messages

  • Is this the best approach to reduce the memory?

    Hi -
    I have been given a task to reduce heap memory so that the system can support more users. I have used various suggestions from this forum to estimate the size of the object in memory, and I have reached a point where I think I have an approximate size (not 100% accurate).
    The object basically holds some objects of other classes, which are created when the main object is created. The intent was to initialize the nested objects once and reuse them in the main object. I saw a significant reduction in the size of the object when I create these nested objects locally in the methods that use them.
    Before moving the objects to method level
    class A {
        Object b = new Object();
        Object c = new Object();
        Object d = new Object();
        public void method1() {
            b.someMethod();
        }
        public void method2() {
            b.someMethod();
        }
        public void method3() {
            c.someMethod();
        }
        public void method4() {
            c.someMethod();
        }
        public void method5() {
            d.someMethod();
        }
        public void method6() {
            d.someMethod();
        }
    }
    After moving the objects to method level
    class A {
        public void method1() {
            Object b = new Object();
            b.someMethod();
        }
        public void method2() {
            Object b = new Object();
            b.someMethod();
        }
        public void method3() {
            Object c = new Object();
            c.someMethod();
        }
        public void method4() {
            Object c = new Object();
            c.someMethod();
        }
        public void method5() {
            Object d = new Object();
            d.someMethod();
        }
        public void method6() {
            Object d = new Object();
            d.someMethod();
        }
    }
    Note: this object remains in the HTTP session for at least 2 hours. I cannot change the session timeout.
    Is this a better approach to reduce the heap size? What are the side effects of creating all the objects locally in the methods, where they will be on the stack?
    Thanks in advance

    The point is not that the objects are on the stack - they aren't, all objects are in heap, but that they have a much shorter life. They'll become unreachable as soon as the method exits, rather than surviving until the session times out. And the garbage collector will probably recycle them pretty promptly, because they remain in "Eden space".
    (In future versions of the JVM Sun is hoping to use "escape analysis" to reclaim such objects even faster).
    Of course some objects might have a significant creation overhead, in which case you might want to consider creating some kind of pool of them from which one could get borrowed for the duration of the call. With simple objects, though, the overheads of pooling are likely to be higher.
    Are these objects modified during use? If not, you might simply be able to create one instance of each for the whole application by changing the fields in the original class to static. The decision depends on thread safety.
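    To make that last suggestion concrete, here is a minimal sketch of the shared-instance variant. Helper is a stand-in for the poster's real classes, and the change is only safe if those objects are stateless or otherwise thread-safe, since a single instance is now shared by every session.

    class Helper {
        // Stand-in for the real b/c/d classes; must be stateless or thread-safe.
        void someMethod() { /* ... */ }
    }

    class A {
        // One instance per application instead of one per session object.
        private static final Helper b = new Helper();
        private static final Helper c = new Helper();
        private static final Helper d = new Helper();

        public void method1() {
            b.someMethod();
        }
        // methods 2 to 6 stay the same; they just use the static fields
    }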

  • What are the best practices recommended by Microsoft to give external internet access to an intranet portal?

    Hi
    What are the best practices recommended by Microsoft?
    I have an intranet portal in my organization used by employees, and I want to give employees external access from the internet as well.
    Can I use the same URL for employees accessing the intranet portal internally and externally, or different URLs,
    like https://extranet.xyz.com.in (external) and http://intranet.xyz.com.in (internal)?
    The internal URL accessed by employees is http://intranet.xyz.com.in,
    and the portal is configured with claims-based authentication.
    We have an F5 for load balancing:
    a request from external users to the F5 is an HTTPS request, and from the F5 to the SharePoint server it is an HTTP request;
    the SharePoint server responds to the F5 over HTTP, but the F5 responds to external users over HTTPS.
    When I change the settings in Alternate Access Mappings, all links change to HTTPS,
    but the authentication link still shows HTTP and the authentication page does not open.
    adil

    Hi,
    One of my clients has an environment similar to yours, with an internal pair of F5s and another pair used for access from the internet.
    I am only going to focus on the method using an F5 load balancer with SSL offloading. The setup of the F5 itself will not be covered in detail, but a reference to the documentation supporting SharePoint with SSL offloading is provided below.
    Since you are going to use SSL offloading, you do not need to extend your web applications to separate IIS websites with unique IP addresses.
    1. Configure the F5 with SSL offloading.
    2. Configure an internal AAM for SSL (HTTPS) for each web application that maps to the public HTTP FQDN AAM setting for that web application.
    Our environment has an additional component: we require RSA authentication for all internet-facing sites, so we have the extra step of extending each web application to a separate IIS website and configuring RSA for each extended website.
    Reference SharePoint F5 configuration:
    http://www.f5.com/featured/video/ssl-offloading/
    -Ivan

  • What is the best approach to generate control numbers from BPEL?

    1. If we want to control ISA/GS/ST control numbers from BPEL, what is the best approach to do that?
    2. How do we generate these control numbers, and where do we store them to get a proper sequence out of them?
    Thanks,
    Kathar

    Internally, Oracle B2B uses a database sequence for generating the control numbers. It is the best approach, but at the same time it is not very straightforward, especially in the case of a clustered database. So you may carefully implement the same with BPEL.
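    For illustration only, here is a minimal JDBC sketch of drawing a control number from a database sequence; the sequence name, connection details and 9-digit ISA-style padding are assumptions, not Oracle B2B's internal implementation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ControlNumbers {
        // Fetch the next ISA control number from a hypothetical sequence.
        // The sequence guarantees uniqueness even across clustered (RAC) nodes.
        static String nextIsaControlNumber(Connection conn) throws SQLException {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT isa_control_seq.NEXTVAL FROM dual")) {
                rs.next();
                return String.format("%09d", rs.getLong(1));   // ISA13 is a fixed-width 9-digit field
            }
        }

        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "b2b_user", "secret")) {
                System.out.println(nextIsaControlNumber(conn));
            }
        }
    }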
    Hi Anuj,
    If we let B2B generate the control numbers in a clustered environment, are there any settings we have to configure?
    "So you may carefully implement the same with BPEL. BTW, what is the use case behind this?"
    We were thinking of using this to send duplicate messages to two TPs, but we decided to go with a Java callout, as you suggested in another thread.
    Thanks!
    Kathar

  • Best approach to get the source tables into Target

    Hi
    I am new to GoldenGate and I would like to know the best approach to get the source tables replicated into the target (Oracle to Oracle) before performing the initial load, without using exp/expdp. Is there any native GoldenGate utility which I can use during the initial load, or beforehand, that will create the tables on the target before loading the data?
    Thanks

    I don't think so; for the initial-load replication your structure should already be available on the target machine. Once your machines are in sync you can use the GoldenGate DDL setup to automatically replicate tables together with their data.
    A better approach for you is to create the structure on the target machine using export/import. In the export, use CONTENT=METADATA_ONLY to copy the structure only,
    like
    expdp <<user>>/<<password>>@connection_string schemas=abc directory=TEST_DIR dumpfile=gg.dmp content=metadata_only logfile=gg.log

  • Best approach to replicate the data.

    Hello everyone,
    I want to know the best approach to replicate the data.
    First let me explain the scenario: I have one Oracle 10g Enterprise Edition database and 20 Oracle 10g Standard Edition databases. The Enterprise Edition runs at the center, and the other 20 Standard Edition databases run at the sites.
    The data moves from the center to the sites and vice versa, not between sites. There is only one schema to replicate, with more than 500 tables.
    What will be best for replication (updatable MVs, Oracle Streams, or anything else)? It's urgent, please.
    Thanks in advance.
    Edited by: user560669 on Dec 13, 2009 11:01 AM

    Hello,
    Yes, MVs or Oracle Streams are the common ways to replicate data between databases.
    I think that in your case (you have to replicate a whole schema) Oracle Streams is interesting (it's not so easy to maintain 500 MVs).
    But you must take care of the edition type.
    I'm not sure that Standard Edition allows the advanced replication features. It seems to me (but I may be wrong) that updatable MVs are an EE feature.
    Streams, on the other hand, seem to be available even in SE.
    Please find enclosed some links about it:
    [http://www.oracle.com/database/product_editions.html]
    [http://www.oracle.com/technology/products/dataint/index.html]
    Hope it can help,
    Best regards,
    Jean-Valentin

  • Definition of the best approach on how to do reporting between BPC and BW

    Hi,
    I need your opinion on the best approach for reporting across BPC and BW.
    For example, if we want to report on Actuals vs. Budget using BW, how should we manage this, since technically the BPC model and the BW InfoCube are different?
    The BPC models hold the budget and BW has the actuals, but the InfoObject used for Account is different. What is the best approach for the reporting?
    Thanks in advance,
    JA

    Hi Gersh,
    I already thought of that option, but the problem is the yellow requests in the InfoCube, which are not used by the VirtualProvider (VP).
    In the past I used report RSAPO_CLOSE_TRANS_REQUEST_ALL3 in the virtual function module to close the requests, but now I don't want to use a VP based on a function module.
    Is there any option to use the data in yellow requests in a VP based on a DTP?
    Best regards,
    JA

  • The best way of carrying the search string across different JSP pages?

    I have heard about transfer objects. What about carrying the object in the session across the different pages?
    Please suggest the best approach to carry the search string across different JSP pages.
    Thanks,
    Vijendra

    I doubt it's possible even with a fancy HTML widget, although the last iBA update now allows links to other books.
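    Coming back to the JSP question itself: the simplest approach is usually to put the search string into the HttpSession so every page in the flow can read it. A minimal servlet sketch follows; the attribute name, parameter name and target page are illustrative.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class SearchServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Store the submitted search string once; it stays available for the whole session.
            HttpSession session = request.getSession();
            session.setAttribute("searchString", request.getParameter("q"));
            response.sendRedirect("results.jsp");
        }
    }

    Any later JSP page can then read it back with ${sessionScope.searchString}. A transfer object works the same way - store the object instead of the plain string - but for a single search string a session attribute is usually enough.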

  • What is the best approach to handle multiple FKs to a single table?

    If two tables are joined with each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    The objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name.
    If the above two objects are imported with their FKs into an EUL and Discoverer Plus is used to create the above report, then on the first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like:
    select col1, col2,...coln,
    pc.name, pc.address,.... pc.phone,
    pm.name, pm.address,.... pm.phone
    from main m,
    person pc,
    person pm
    where m.person_id_creator = pc.person_id
    and m.person_id_modifier = pm.person_id
    The second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with PERSON_MODIFIER using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    Question is, what approach is better OR is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all the joins (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the PERSON table is further linked to other tables and users want to see that information too (for instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address - now you will have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if filters on person names are needed (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the PERSON table (say the PERSON table contains 50 attributes; it is not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion the best approach (although by no means the only approach - see below) would be to load the object as two folders, with one join going to the first folder and the second join going to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (E.g PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or to create a complex folder. Either will work. In both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, and the number of records per table can be reduced by replicating tables, but then we have to maintain all the data in another schema as well. That means we have to update two schemas in a given session - one schema for a particular user and another schema holding all the data - so there may be update problems.)
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables, and there may be far more records than in the previous case.
    Which is the better option?
    Please give your valuable ideas.

    That is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • What is the best way to display an Interactive Report having 20 columns?

    Hi,
    I have an Interactive Report with many columns (around 20).
    What is the best way to display that report? By default you have to scroll horizontally to see all the columns.
    I want to avoid horizontal scrolling.
    Thanks,
    Deepak

    Hello Deepak,
    You mean apart from using a smaller font size or a bigger monitor?
    You can think about combining / wrapping columns (so more data in one column).
    Or hide some less important data and show that only on demand.
    Greetings,
    Roel
    http://roelhartman.blogspot.com/
    You can reward this reply by marking it as either Helpful or Correct ;-)

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10g (10.2.0.3.0). OWB is installed on Linux.
    We have a number of mappings, and we have built process flows for them as well.
    I would like to know the best approaches to incorporate restart options in our process, i.e. handling the failure of a mapping in a process flow.
    How do we recycle failed rows?
    Are there any built-in features or best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a restart process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How have other forum members handled the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    "How many mappings (range) do you have in a process flow?" Several hundred (100-300 mappings).
    "If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?" Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 is not performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor: open the diagram with the process flow, select mapping m2, click the Expedite button and choose the option Repeat.
    "On restart, will it run m1 again and then m2 and so on, or will it restart at row 1 of m2?" You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of an error all table updates are rolled back; there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in the post-mapping - you must carefully analyze the results of the error).
    "What will happen if m3 fails?" The process is suspended and you can restart execution from m3.
    "By having no failover and max. number of errors = 0, you achieve recycling failed rows to zero (0)." These settings guarantee only two possible return results of a mapping - SUCCESS or ERROR.
    "What is the impact if we have a large volume of data?" In my opinion, for large volumes set-based mode is the preferred data processing mode.
    With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of  about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this is to find all tag items and replace them with shared variable items?
    Thanks in advance with any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables. The change from tags to shared variables took place in the transition to LabVIEW 8. The KnowledgeBase article Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables.
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • Newbie: What is the best approach to integrate BO Enterprise into a web app?

    Hi
    1. I am very new to Business Objects and .NET. I need to know the best approach when integrating BO into my web app, i.e. which SDK do I use?
    For now I want to provide very basic viewing functionality for the following reports:
    -> Crystal Reports
    -> Web Intelligence reports
    -> PDF reports
    2. Where do I find a standalone install for the Business Objects Enterprise XI .NET providers?
    I only managed to find the wssdk, but I can't find the others. Business Objects Enterprise XI does not want to install on my (development) machine - it installed fine on the server - so I was hoping I could find a standalone install.

    To answer question one: you can use the Enterprise .NET SDK for each, though for viewing WebI documents it is much easier to use the openDocument method of URL reporting.
    The Crystal Reports and PDF instances can be viewed easily using the SDK.
    Here is a link to the Developer Library:
    [http://devlibrary.businessobjects.com/]
    VB.NET XI Samples:
    [http://support.businessobjects.com/communityCS/FilesAndUpdates/bexi_vbnet_samples.zip.asp]
    C# XI Samples:
    [http://support.businessobjects.com/communityCS/FilesAndUpdates/bexi_csharp_samples.zip.asp]
    Other samples:
    [https://boc.sdn.sap.com/codesamples]
    I answered the provider question on your other thread.
    Good luck!
    Jason

  • What's the best approach to work with Excel and CSV files?

    Hi gurus, I have a question for you. Based on your experience, what's the best approach to working with Excel or CSV files that have to be uploaded through Data Services to your data warehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so that a set of reports can be generated from the data warehouse once the files have been uploaded to tables in the DWH. The calculations vary from month to month. The user doesn't have a front end to upload the Excel files directly to Data Services. The end user also needs to keep track of which person uploaded the files for a given month.
    1. The end user places their 4 Excel files in a shared directory that is visible to Data Services.
    2. Data Services executes a scheduled job that reads the four files and uploads them to the data warehouse at a set time, let's say 9:00 pm.
    It makes me wonder: what happens if the user needs to present their reports immediately and can't wait until 9:00 pm? Is it possible for the end user to execute some kind of action (outside of the Data Services environment) so Data Services "could know" that it has to process those files right now, instead of waiting for the nightly schedule?
    Is there a way for DS to track who uploaded those files?
    Would it be better to build a front end for the end user so they can upload their four files directly to the data warehouse?
    Waiting for your comments to resolve this dilemma.
    Best regards,
    Erika

    Hi,
    There are functions in DS that capture input files automatically. You could use the file_exists() or wait_for_file() functions to do that. Schedule the job to run every few minutes and, if the file exists, run the load. This can be done by using a file naming convention with a date and timestamp, or by moving the old files to an archive after each run so that DS only waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun
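    Outside of DS itself, the "check for the file, then trigger the load" pattern Arun describes looks roughly like this in plain Java; the directory, file name and trigger action are all assumptions.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class DropFolderPoller {
        public static void main(String[] args) throws InterruptedException {
            // Hypothetical shared drop directory checked once a minute.
            Path dropFile = Paths.get("/shared/uploads/monthly_calculations.xlsx");
            while (true) {
                if (Files.exists(dropFile)) {
                    System.out.println("File found - trigger the load job here");
                    break;
                }
                Thread.sleep(60_000);   // same idea as scheduling the DS job every minute
            }
        }
    }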
