Best practice for modelling redirects using the threat modelling tool

I'm in the process of modelling systems using the threat modelling tool.
The services in question do a lot of redirecting and handing off to one another on the client side, including things like ACS and identity providers.
If Web App A redirects to Web App B, what is the best way to draw this?
1) App A (process) > HTTPS redirect > browser > request > App B (process), or
2) Can I just model it as HTTPS from A to B?
Obviously option 2 simplifies diagrams hugely, but does that then exclude a range of potential threats, or does the tool cater for this implicitly?

Storing documents outside the web root and using <cfcontent> to push their contents to the users is the most secure method.
Putting the documents in a subdirectory of the web root and securing that directory with an Application.cfm will only protect .cfm and .cfc files (as that's the only time that CF is involved in the request). That is, unless you configure CF to handle every request.
The virtual directory is no safer than putting the documents in a subdirectory. The links to your documents are still going to look like:
http://www.mysite.com/virtualdirectory/myfile.pdf
Users won't need to log in to access these documents.
<cfcontent> or configuring CF to handle every request is the only way to ensure users have to log in before accessing non-CF files, unless you want to use web-server authentication.
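For illustration only, here is the same pattern sketched as a Java servlet, purely as an analogy to the <cfcontent> approach described above; the document directory, the "userId" session attribute, and the URL mapping are made-up assumptions, not anything from the original thread. The idea is identical: check the user's session, then stream a file that lives outside the web root.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical servlet: serves documents stored outside the web root,
// but only to users who already have an authenticated session.
@WebServlet("/documents/*")
public class SecureDocumentServlet extends HttpServlet {

    // Assumption: the documents live in a directory the web server never serves directly.
    private static final Path DOC_ROOT = Paths.get("/var/app/protected-docs");

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        HttpSession session = req.getSession(false);
        if (session == null || session.getAttribute("userId") == null) {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);   // not logged in
            return;
        }
        String pathInfo = req.getPathInfo();                       // e.g. "/myfile.pdf"
        if (pathInfo == null || pathInfo.equals("/")) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        // Keep only the file name so "../" tricks cannot escape DOC_ROOT.
        Path file = DOC_ROOT.resolve(Paths.get(pathInfo).getFileName().toString());
        if (!Files.isRegularFile(file)) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("application/pdf");                    // or look up the real MIME type
        Files.copy(file, resp.getOutputStream());                  // the <cfcontent> equivalent
    }
}
Because the directory is never exposed by the web server, every download necessarily passes through the authentication check, which is exactly the point of the <cfcontent> approach.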

Similar Messages

  • Best practice for multiple instances of the same BEX query

    Hi there,
    I'm wondering what's the best way to use multiple instances of the same BEX query. Let me explain what I mean:
    I have a dashboard with different queries feeding different periods of time, such as week to date, month to date and so on. One query for each, since it is based on a user exit.
    For each query I want to show different data in different sections of my dashboard. For example: sales per director or sales per customer group, sales per day, sales per week and the like. I tried to connect a simple bar chart via a direct connection, but with no success due to the multiple lines generated by the addition of the sales director, customer group, week number and so on.
    My question is about the way to connect the different queries efficiently in order to show the different data while avoiding multiple useless lines.
    The image above shows the query browser where, for example, for a Month to Date query there will be multiple lines for each week as well as one line for each director. If, for two different components, I want to show data per week and data per director or some other representation, what is the best practice:
    Add another instance of the same query with only the week information, and another with only the director info?
    Should I bind those to the excel file and use formulas to make final calculations?
    Will there be performance issues from adding different instances of the same query?
    I have 6 different queries (read: 6 user exits that filter time via user exit).
    Depending on the best practices there might be 4 instances for each for a total of 24 instances in the query browser.
    I hope my question is clear enough; if not, please do not hesitate to ask and I'll clarify as much as possible.
    Regards,
    Steve

    Hi Steve,
    You might have been trying to find a solution for a long time. If I understood your question correctly, let me clarify a few points.
    You are trying to access the BEx query, which is designed with the exits in the background based on the logic, and to call all the dimensions and key figures in a single connection. Then you are trying to map that data onto the charts.
    Steve, try making more connections based upon the logic and splitting them: use the same query, but split it into sales per customer group, sales per day and sales per week by making three different connections. You can merge the prompts from all connections.
    Hope this Helps!!!
    Sorry if I misunderstood your question.
    --SumanT

  • Best Practices for Team Collaboration using Adobe Captivate

    With a team of 6 Instructional Designers, how can Adobe Captivate be approached so that we can collaborate on producing e-learning material while maintaining a consistent look and feel of the e-learning we produce?
    What are the best practices for a team of 6 IDs working and creating e-learning material in Captivate? Is there anything built-in that allows us to connect to the same libraries, templates, etc. to share?
    Please advise.
    Thank you!
    Susanne

    Only some tips; I have never collaborated with someone else, being a solo teacher. You didn't mention which version you are using; what I write here is meant for CP7.
    Be sure to prepare a theme and/or a template that will be used by everyone. A theme consists of master slides, object styles and the skin editor. Master slides can have custom navigation shape buttons. In a template you can also prepare different slides with placeholders, and possibly advanced actions etc. For CP6 and earlier that is the only way to reuse advanced actions; in Captivate 7 you can export shared actions that can be imported into any project for reuse.
    A feature that few users know about is external libraries. You can open the library of any project as an external library in another project. That is a good way to store assets that you want to use in different projects: images, audio clips, video clips, possibly equations. The shared actions in a library cannot (yet?) be used in another project, however.
    If you are on CP7 you automatically have round-tripping with source Adobe Photoshop files and source Audition files, both from CC. That can also make collaboration a lot easier if those assets are prepared in those applications. I will not expand on that, because I'm not sure you have the Creative Cloud applications.
    Those are my two cents.
    Lilybiri

  • Best Practices for multiple authors using single project?

    We are having many issues, particularly with moving, renaming, and multiple check-out warnings. We have a single project with many authors and it seems like RH is not designed to work that way. There is an article in the RH devnet archive entitled "Sharing RoboHelp Project Among Multiple Authors" that says:
    "At first there may be the temptation to let every author work on every file in a project. This is certainly not a best practice. Regardless of source control, it is always best to designate certain authors as owning certain content-related sections, folders, or topics within a project - particularly at the folder level."
    This statement, and our experience, seem to indicate that RH is not a true CMS as we had envisioned. What are the best practices for this scenario, to avoid stepping on each other's toes and having problems with source control?

    I have moved this to the source control forum for the gurus there to answer.
    In the meantime, I must admit I read that statement the same way as you the first time. However, on rereading, I think what the author is saying is not that what you want cannot be done, but rather that it is best practice to guide authors to work in discrete areas.
    I will leave it to the author or another guru to give you a more complete answer.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • What are the commands for compiling c++ using the command line tools for xcode?

    Hi, I am taking a C++ class in school and I would like to be able to practice at home. I found the Command Line Tools for Xcode and went ahead and installed them on my computer. Now I need to know the commands and procedure to compile and run C++.

    c++ testfile.cc
    ./a.out

  • Best practice for BI-BO 4.0 data model

    Dear all,
    we are planning to upgrade BOXI 3.1 to BO 4.0 next year and would like to know if best practices exist for the BI data model. We have found some general BO 4.0 presentations, and it seems that enhancements and changes have been implemented: our goal is to better understand which BI data model best fits the BO 4.0 solution.
    Have you found documentation or links to BI-BO 4.0 best practices to share?
    thanks in advance

    Have a look in this document:
    http://www.sdn.sap.com/irj/sdn/index?rid=/library/uuid/f06ab3a6-05e6-2c10-7e91-e62d6505e4ef#rating
    Regards
    Aban

  • Best Practice for SOA-oc4j using more than 4 GB

    Hi,
    is there any best practice for how to use the SOA container with more than 4 GB?
    Oracle only ships a 32-bit Java...

    If you need to modify the myRIO FPGA personality you have a few options.
    The best option is to start with the myRIO FPGA sample project, add and remove components as needed and then build your bitfile.  Any registers (LV FPGA controls / indicators) you don't modify will still work with the Advanced IO VIs and Express VIs.  In order to use the new bitfile (FPGA Personality) you'll need to update the Open FPGA VI Reference in myRIO v1.1 Open.vi (LabVIEW 2013\vi.lib\myRIO\Common\Instrument Driver Framework\myRIO v1.0\myRIO v1.1 Open.vi).
    After doing this any time you use a myRIO Express VI or Advanced IO VI it will use your custom bitfile.  Any peripheral channels you've left in place will continue to work.  Any channels you've removed will still show up in the VIs, but will not work (they will probably throw errors at runtime) and any new channels you added will not show up in the VIs.  For new channels you'll need to use the FPGA Read / Write nodes to read and write the configuration and data register you created in the FPGA personality.  These changes will persist on that computer until you change the Open FPGA VI Reference back to the original bitfile.
    Let us know if you have questions about any of this.
    Thanks!
    -Sam K
    LabVIEW Hacker
    Join / Follow the LabVIEW Hacker Group on google+

  • What are some best practices for Effective Sequences on the PS job record?

    Hello all,
    I am currently working on an implementation of PeopleSoft 9.0, and our team has come up against a debate about how to handle effective sequences on the job record. We want to fully grasp what the best way is to leverage this feature from a functional point of view. I consider it to be a process-related topic, and that we should establish rules for the sequence in which multiple actions are inserted into the job record with the same effective date. I think we then have to train our HR and Payroll staff on how to correctly sequence these transactions.
    My questions therefore are as follows:
    1. Do you agree with how I see it? If not, why, and what is a better way to look at it?
    2. Is there any way PeopleSoft can be leveraged to automate the sequencing of actions if we establish a rule base?
    3. Are there best practice examples or default behavior in PeopleSoft for how we ought to set up our rules about effective sequencing?
    All input is appreciated. Thanks!

    As you probably know by now, many PeopleSoft configuration/data (not transaction) tables are effective dated. This allows you to associate a dated transaction on one day with a specific configuration description, etc. for that date, and a different configuration description, etc. on a different transaction with a different date. Effective dates are part of the key structure of effective dated configuration data. Because effective date is usually the last part of the key structure, it is not possible to maintain history for effective dated values when data for those configuration values changes multiple times in the same day.
    This is where effective sequences enter the scene. Effective sequences allow you to maintain history regarding changes in configuration data when there are multiple changes in a single day. You don't really choose how to handle effective sequencing. If you have multiple changes to a single setup/configuration record on a single day and that record has an effective sequence, then your only decision is whether or not to maintain that history by adding a new effective sequenced row or updating the existing row.
    Logic within the PeopleSoft delivered application will either use the last effective sequence for a given day, or the sequence that is stored on the transaction. The value used by the transaction depends on whether the transaction also stores the effective sequence. You don't have to make any implementation design decisions to make this happen. You also don't determine what values to use or how to sequence transactions. Sequencing is automatic. Each new row for a given effective date gets the next available sequence number. If there is only one row for an effective date, then that transaction will have a sequence number of 0 (zero).
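    To make the retrieval rule concrete, here is a small plain-Java sketch (not PeopleSoft code; the JobRow record and its values are invented for illustration): take the latest effective date on or before the as-of date, and within that date the highest effective sequence.
    import java.time.LocalDate;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    // Hypothetical illustration of effective date + effective sequence retrieval.
    record JobRow(LocalDate effDt, int effSeq, String action) {}

    public class EffSeqDemo {
        // The "current" row as of a given date: latest effective date on or before
        // asOf, and within that date the highest effective sequence.
        static Optional<JobRow> currentRow(List<JobRow> rows, LocalDate asOf) {
            return rows.stream()
                    .filter(r -> !r.effDt().isAfter(asOf))
                    .max(Comparator.comparing(JobRow::effDt)
                            .thenComparingInt(JobRow::effSeq));
        }

        public static void main(String[] args) {
            List<JobRow> rows = List.of(
                    new JobRow(LocalDate.of(2024, 1, 1), 0, "HIR"),   // hire
                    new JobRow(LocalDate.of(2024, 6, 1), 0, "XFR"),   // transfer
                    new JobRow(LocalDate.of(2024, 6, 1), 1, "PAY"));  // pay change, same day
            // Prints the PAY row: same effective date as the transfer, higher sequence.
            System.out.println(currentRow(rows, LocalDate.of(2024, 7, 1)));
        }
    }
    A single row on a date keeps sequence 0, and each additional row on the same date simply gets the next sequence number, which matches the automatic behaviour described above.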

  • Best Practices for creating PDFs using PLPDF?

    Does anyone have any suggestions for Best Practices in making PDF files using PLPDF?
    I have been using it for about a month now, and the best approach that I have come up with is to use MS Access to prototype the layout of a report. Once I have all the graphics areas and text areas lined up how I want them, I then write PL/SQL code to create a procedure which is called from an HTMLDB page. MS Access is handy in that it provides the XY coordinates for each text area and graphics area. It also provides the dimensions of the respective cells. As long as I call plpdf.Init('P', 'in', 'letter') at the beginning of the procedure, both my MS Access prototype and my PLPDF code are using inches - this makes the translation relatively easy.
    Has anybody found anything else easier/better?
    Regards,

    You can make it happen by creating a private connection for the 40 users via a capi script and, when creating the portlet, selecting the 2nd option in the "Users Logged In" section. With this, the portlet uses their own private connection every time the user logs in.
    That way it won't ask for a password.
    Another thing: there is an option of entering the password or not in ASC in the Discoverer section, if your version is 10.1.2.2. Let me know if you need more information.
    thanks
    kiran

  • Best Practices for configuring ICMP from the outside

    Question,
    Are there any best practices or recommendations on how ICMP should be configured from the outside? I have been cleaning up the rules on our ASA, as a lot were simply ported over years ago when we retired our PIX. I noticed that there is a rule to allow ICMP any any and began to wonder how this works when the rules above it are for specific IP addresses and specific ports. This in turn started me looking to see if there was any documentation or anything to help me determine a best practice. Anyone know of anything?
    As a second part, how does this flow on a firewall if all the addresses are NATted? Is the ICMP traffic simply passed through the NAT and the destination simply responds?
    Brent                   

    Here you go, bro!
    http://checkthenetwork.com/networksecurity%20Cisco%20ASA%20Firewall%20Best%20Practices%20for%20Firewall%20Deployment%201.asp#_Toc218778855
    access-list inside permit icmp any any echo
    access-list inside permit icmp any any echo-reply
    access-list inside permit icmp any any unreachable
    access-list inside permit icmp any any time-exceeded
    access-list inside permit icmp any any packets-too-big
    access-list inside permit udp any any range 33434 33464
    access-list inside deny icmp any any log
    P/S: if you think this comment is useful, please do rate them nicely :-)

  • For Elements 6: Using the Quick Select Tool - Best Settings

    I would like to hear feedback on the best settings in Elements 6 for the Quick Select Tool. I'm trying to remove the background from a product image.
    Thanks!

    Perhaps you could post your sample image here to get informed suggestions.
    There are many ways to delete the backgrounds but they all depend on the type and complexity of the photo.
    Good luck.

  • Best practices for building menus using resource bundles?

    Greetings; I am curious to find out what current best practices people are using to build menus/menu bars from resource bundles, specifically ListResourceBundle.
    What I am trying to figure out is how best to write my Swing application so it does not need to know what menu items it needs to grab from the resource bundle.
    The only idea I have come up with is this:
    class MyBundle extends ListResourceBundle {
        private final Object[][] contents = {
            { "menubar", new String[][] { { "menu.file.item", "blah" } /* .... */ } }
        };
        protected Object[][] getContents() { return contents; }
    }
    Inside the GUI class:
    String[][] menubar = (String[][]) resourceBundle.getObject("menubar");
    I would then iterate over the menu bar items and build the menu. I would have to use a naming scheme and then parse appropriately to know when to start a new menu, when a submenu occurs, etc.
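    To make that concrete, here is a minimal sketch of the lookup-and-build step, assuming the bundle stores a String[][] under "menubar" whose keys follow a menu.<menuName>.<itemName> naming scheme (the names and structure are illustrative assumptions, not a standard API):
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.ResourceBundle;
    import javax.swing.JMenu;
    import javax.swing.JMenuBar;
    import javax.swing.JMenuItem;

    // Hypothetical builder for the scheme sketched above: each row of the String[][]
    // is { "menu.<menuName>.<itemName>", "<localized label>" }.
    public class MenuBarBuilder {
        public static JMenuBar build(ResourceBundle bundle) {
            String[][] entries = (String[][]) bundle.getObject("menubar");
            Map<String, JMenu> menus = new LinkedHashMap<>();   // keeps menu order stable
            JMenuBar menuBar = new JMenuBar();
            for (String[] entry : entries) {
                String[] keyParts = entry[0].split("\\.");      // e.g. "menu.file.open"
                String menuName = keyParts[1];
                JMenu menu = menus.computeIfAbsent(menuName, name -> {
                    JMenu m = new JMenu(name);
                    menuBar.add(m);
                    return m;
                });
                menu.add(new JMenuItem(entry[1]));              // entry[1] is the localized label
            }
            return menuBar;
        }
    }
    Submenus could be handled the same way by looking at a third key segment; whether that is cleaner than nesting arrays in the bundle is largely a matter of taste.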
    Is this the common practice, or does anyone know of a more clever way of doing this? I've searched various FAQs and googled about, but I have yet to come across any sort of tutorial or page that covers this.

    Anyone have any input on this? Am I close to the solution people are using out in real production environments?

  • Flash Pro CS6 - What is Best Practices for where to put the Document Class?

    Suppose you are given a folder with these 5 things in it:
    folder: bin
    folder: src
    folder: lib
    folder: obj
    ProjectName.as3proj
    I am right now just creating a .fla called ProjectNameShell.fla that is basically an empty file. I just use it to publish through CS6 and also to state which class to start from. Is the standard place to put this .fla in the root folder or in src?

    Looks like a FlashDevelop project. In that case I'd put the FLA and project-specific classes in /src, any 3rd party code in /lib, and publish to /bin. /obj is the wildcard, which could be complex objects (3D, etc). Some people choose to keep the FLA in the root folder (not in any of those folders) and leave /src to just project-specific code (mostly for easy repository usage).
    There's no "perfect" way to do it. Just be on the same page with the people you work with.

  • FAQ: Are there best practices for building projects within the 20 page/state limit?

    The 20 page/state limit in Flash Catalyst is there to prevent Catalyst (and your finished application) from running slowly. You can however build efficient applications that have more than 20 states by using custom components. Custom components can contain states as well; so by creating an app that has several states, and using custom components that have states, you can get more unique views of your app while keeping it efficiently built.
    Try this:
    1. Select some of your artwork where you need more states. Right-click and choose "Convert to Custom Component".
    2. Double-click to edit the custom component. Note that you can now create states in the custom component. Try creating a few states here.
    3. To exit editing the component, double click a blank area on the artboard.
    4. If you try creating an "On click transition to state" interaction now, you will see that you can choose from both the states of the application and the states of the custom component.
    Answered by: Ty Voliter. See entire discussion.
    More help:
    Video tutorial on custom components
    Video/demo discussing the benefits of "pushing interactivity down" into custom components, by Ian Giblin @ MAX 2009
    (jump to the 11:15 mark, and watch through to about 20:30)
    Another forum post by Ty:
    Here's an ASCII diagram to illustrate one way of pushing application or top-level pages/states down into a custom component:
    Before:
    State1     State2     State3     State4     State5     State6     State7...
    After (project refactored to push three top-level states down into a custom component):
    State1     State2     State3      State7...
                        |
      Custom Component
    State4     State5     State6
    For some projects this can be an inconvenient workaround, and for others it can really clean things up by reducing complexity. If you have several top-level pages in your app that are completely different from other pages (for example, they don't share any objects with the other states), then pushing these down into a custom component makes things a lot more manageable in Catalyst. The layers panel, for example, shows all objects across all states - this is useful when your states share many objects. If, however, your states don't share a lot of objects, the layers panel can get a bit unmanageable. Refactoring into a custom component allows you to fix this by grouping content into a container.

  • What is the best practice for storing iPads over the summer?

    Over the summer break do iPads need a maintenance charge or can they be charged to full, stored over the summer, and then a week or so before school charged again for deployment? Thanks.

    Generally, it's not a good idea to charge Li-P batteries fully prior to long-term storage. A more correct method is to have them charged between 40-60% before long-term storage. Be sure to shut off the devices fully by holding the power button down until the Power Off screen appears.
