Securing with NAT - Best Practice?

Hi,
Is it forbidden to use NAT exemption from Internal to DMZ?
I have heard of a compliance requirement in banking where two servers need to communicate but are not allowed to know each other's IP addresses. Is that right?
What about NAT as a second layer of the firewall?
What is the best practice for securing an enterprise network from a NAT point of view?
Thanks

Hello Ibrahim,
No, not at all; that is not a restriction. You can do it if needed.
It sounds like your environment has a requirement that these two servers communicate with each other without knowing each other's IP addresses.
Then NAT is your friend, as it will satisfy exactly the requirement you are looking for.
That said, I do not consider NAT a security measure, as it performs no inspection and applies no rule set or policy; but I can assure you there are a lot of people who do think of it as one.
I see it as an IP service that allows us to preserve the IP address space.
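To make that concrete, here is a minimal sketch of how it could look on an ASA (8.3+ object NAT syntax; the object names and addresses are invented for illustration). Each server is published to the other side under a translated address, so neither host ever learns its peer's real IP:
! Server A (10.1.1.10, inside) appears in the DMZ as 192.168.100.10
object network SRV-A-REAL
 host 10.1.1.10
 nat (inside,dmz) static 192.168.100.10
!
! Server B (172.16.1.20, dmz) appears inside as 10.200.200.20
object network SRV-B-REAL
 host 172.16.1.20
 nat (dmz,inside) static 10.200.200.20
Each server then talks only to the translated address of the other, which satisfies the "must not know each other's IP" requirement.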
For more information about core and security networking, follow my website at http://laguiadelnetworking.com
For any questions, contact me at [email protected]
Cheers,
Julio Carvajal Segura

Similar Messages

  • Secure my laptop Best Practice idea requests

    My MacBook was stolen with all my personal information unencrypted last month and I now have a new 13 inch MacBookPro. I would like some Best Practice recommendations for securing the data within my user account. Is there a BIOS level password option on the Apple laptops?
    Any thoughts on identity theft? LoJack for Laptops software tracking? Is Apple's encryption of the home directory stable enough to use routinely, and how does it affect backup and recovery of data? How about online backup - Mozy vs Carbonite or others? I had Mozy and it seems that much less data was actually available to recover than I had thought.
    Or is this a case of the Cow is out of the barn and why shut the door now?!
    Thoughts please!
    Thanks
    Warren Tripp
    Madison, WI

    Warren Tripp wrote:
    I am NOT going to use FileVault however. I tried it once and lost data. Everything I read seems to imply it is not worth the trouble.
    RE encryption, eww is correct - that's the only way to protect your data. Competent individuals (Kappy, eww, and me, for example) could defeat the firmware password protection and your strong admin password in a matter of minutes. A competent thief who was interested in your data would be able to do so as well (most just want the hardware, of course).
    I do agree that FileVault is not the best solution here (I sometimes refer to it as FileFault - there's an inherent risk in having all of your data in a single, huge, encrypted file). I see no need to encrypt iTunes music, my personal photos, etc. Instead, consider creating an encrypted disk image for your sensitive personal data (again with a strong password, and UNcheck the box to store the password in the keychain!).
    http://support.apple.com/kb/HT1578
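    If you want to script that, hdiutil can create an encrypted sparse image from Terminal. A minimal sketch (the size, filesystem, volume name, and path are placeholders):
    hdiutil create -size 2g -type SPARSE -fs HFS+J -encryption AES-256 -volname "Private" ~/Private.sparseimage
    You will be prompted for the passphrase; when you later mount the image, leave "Remember password in my keychain" unchecked, as above.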

  • Running bgp with provider, best practices

    Hi all
    We recently got a link from a provider giving us point-to-point connectivity between two offices; the provider is running BGP with us.
    What best practices should I follow when configuring this? At the moment we have connectivity, with basic neighbor statements etc.
    What should I do for security, and to protect my environment from the provider?
    Cheers
    Carl

    Hi,
    This is a very valid concern for both provider and customer, as the CE-PE connectivity is a connection between two different entities. When we talk about the CE-PE connection, here is what we can protect:
    1. Securing the BGP neighborship by enabling a password
    2. Preventing excessive route flooding
    3. Securing the data over an MPLS VPN network
    For details on these, refer to the document below:
    http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/WAN_and_MAN/L3VPNCon.html#wp309784
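    As a minimal IOS sketch of items 1 and 2 (the AS numbers, neighbor address, and prefix limit are placeholders to adapt):
    router bgp 65001
     neighbor 203.0.113.1 remote-as 64500
     neighbor 203.0.113.1 password S3cretKey
     neighbor 203.0.113.1 maximum-prefix 500 80 warning-only
    The maximum-prefix command logs a warning at 80% of the limit; without the warning-only keyword it would tear the session down once the limit is exceeded.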
    Hope it answers your query.
    Thanks & Regards
    Sandeep

  • Dealing with Drobo (best practices?)

    I have two second generation Data Robotics Drobos, and have been using them under 10.6 on a MacBook via USB. Like many Drobo users, I have had various "issues" over the years, and even suffered 1TB of data loss probably related to the USB eject bug that was in Mac OS X 10.6.5-10.6.7. I have also used the Drobos on a Mac with FireWire.
    My Drobos are set up as 1TB volumes, so my 4x2TB unit shows six 1TB volumes. Using DiskWarrior on some of my volumes has reported "speed reduced by disk malfunction" and DW was unable to rebuild the directory. I fear for my data, so I have been in the process of moving data away from the drive and starting fresh.
    I would like to use this discussion to see what "best practices" others have come up with when dealing with a Drobo on a Mac.
    When I first set up the Drobo, the documentation stated that the unit would take longer to startup if using one big partition, so I chose the smallest value -- 1TB. This initially gave me a few Drobo volumes to use, and as I swapped in larger hard drives, Drobo would start adding more 1TB volumes. I like this approach, since it lets me unmount volumes I am not using (so iMovie does not have to find every single "iMovie Events" I have across 12TB of drives).
    This was also a good way to protect my data. When my directory structure crashed, and was unrepairable, I only lost 1TB of data. Had that happened on a "big" volume Drobo, I would have lost everything.
    Data Robotics' own KB articles will tell you to never use Disk Utility to partition a Drobo, but other KB articles say this is what you must do to use Time Machine... Er? And, under 10.7, they now say don't do that, even for Time Machine. Apparently, if you partitioned under 10.6 or earlier, you can still use your Time Machine backup under 10.7, but if you are on 10.7 only, you have to use some Time Tamer utility and create a sparsebundle image -- and then you cannot browse Time Machine backups (what good is that, then?).
    It's a mess.
    So I am looking for guidance, tips, suggestions, and encouragement. I will soon be resetting one of my Drobos and starting fresh, then after I get everything working again, I will move all my data over to it, and reset my second Drobo.

  • Using XML with Flex - Best Practice Question

    Hi
    I am using an XML file as a dataProvider for my Flex application.
    My application is quite large and is being fed a lot of data, so the XML file that I am using is also quite large.
    I have read some tutorials and looked through some online examples and am just after a little advice. My application is working, but I am not sure whether I have gone about setting up and using my data provider in the best possible (most efficient) way.
    My application consists of the main application (MXML) file and also additional AS files / components.
    I am setting up my connection to my XML file within my main application file using HTTPService:
    <mx:HTTPService
        id="myResults"
        url="http://localhost/myFlexDataProvider.xml"
        resultFormat="e4x"
        result="myResultHandler(event)" />
    and handling my results with the following function:
    public function myResultHandler(event:ResultEvent):void
    {
        myDataFeed = event.result as XML;
    }
    Within my application I am setting my variable values by first declaring them:
    public var fName:String;
    public var lName:String;
    public var postCode:String;
    public var telNum:int;
    and then giving them a value by "drilling" into the XML, e.g.:
    fName = myDataFeed.employeeDetails.contactDetails.firstName;
    lName = myDataFeed.employeeDetails.contactDetails.lastName;
    postCode = myDataFeed.employeeDetails.contactDetails.address.postcode;
    telNum = myDataFeed.employeeDetails.contactDetails.telNum;
    etc.
    For any of my external components (components in a different AS file), I am therefore referencing their values using Application:
    import mx.core.Application;
    and setting the values / variables within the AS components as follows:
    public var fName:String;
    public var lName:String;
    fName = Application.application.myDataFeed.employeeDetails.contactDetails.firstName;
    lName = Application.application.myDataFeed.employeeDetails.contactDetails.lastName;
    As mentioned, this method seems to work; however, is it the best way to do it?
    - Connect to my XML file
    - Set up my application variables
    - Give my variables values from the XML file
    Bearing in mind that in this particular application there are many variables that need to be set, and therefore a lot of lines of code just setting up and assigning variable values from my XML file.
    Could someone please advise me on this one?
    Thanks a lot,
    Jon.

    I don't see any problem with that.
    Your alternative is to skip the instance variables and query the XML directly. If you use the values in a lot of places, then the variables will be easier to use and maintain.
    Also, instead of instance variables, you could put the values in an "associative array" (object/hashtable), or in a dictionary.
    Tracy
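    As a rough sketch of that associative-array idea (the field names follow Jon's XML above; this is illustrative only, not tested against the real feed):
    // One lookup object instead of many instance variables
    var contact:Object = {};
    contact["fName"] = String(myDataFeed.employeeDetails.contactDetails.firstName);
    contact["lName"] = String(myDataFeed.employeeDetails.contactDetails.lastName);
    contact["postCode"] = String(myDataFeed.employeeDetails.contactDetails.address.postcode);
    // Anywhere in the app:
    trace(contact["fName"]);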

  • Saving zip code data with PHP - best practices

    I have built my client an application that analyzes uploaded zip codes for matches with a standard set of zips. These uploaded zips can be one at a time, or a copy/paste from an XLS file (just 5-digit ZIPs).
    They are now asking me to save these uploaded zips, and I am wondering what would be the best way to do that. My two obvious choices are:
    1. Write them to an external text file with a programmatically generated name, and enter the name in the database, keyed to the user.
    2. Write the zips themselves into a blob field in the database.
    I'm inclined to the former, since I don't think there would ever need to be any further manipulation of these zip codes, but what do you think? Are there other choices I may have overlooked?
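    For what it's worth, option 1 could be as small as this PHP sketch (the directory, naming scheme, and the actual database call are placeholders):
    <?php
    // Write the uploaded zips to a uniquely named text file
    $filename = uniqid('zips_') . '.txt';
    file_put_contents('/var/data/zips/' . $filename, implode("\n", $zips));
    // then record $filename in the database, keyed to the user, e.g.:
    // INSERT INTO zip_uploads (user_id, filename) VALUES (?, ?)
    ?>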
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    ==================

    Dang - sorry. Wrong forum.
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    ==================
    "Murray *ACE*" <[email protected]> wrote
    in message
    news:fvfi5j$ig7$[email protected]..
    >I have built my client an application that analyzes
    uploaded zip codes for
    >matches with a standard set of zips. These uploaded zips
    can be one at a
    >time, or a copy/paste from an XLS file (just 5 digit
    ZIPs).
    >
    > They are now asking me to save these uploaded zips, and
    I am wondering
    > what would be the best way to do that. My two obvious
    choices are -
    >
    > 1. Write them to an external text file with a
    programmatically generated
    > name, and enter the name in the database, keyed to the
    user.
    > 2. Write the zips themselves into a glob field in the
    database.
    >
    > I'm inclined to the former, since I don't think there
    would ever need to
    > be any further manipulation of these zip codes, but what
    do you think?
    > Are there other choices I may have overlooked?
    >
    > --
    > Murray --- ICQ 71997575
    > Adobe Community Expert
    > (If you *MUST* email me, don't LAUGH when you do so!)
    > ==================
    >
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    >
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    > ==================
    >
    >

  • UITableView with Images Best Practices

    I have a UITableView with each row having an image coming from a remote URL. There are a great many strategies for dealing with and caching the images. I've narrowed it down to two:
    1. When the table stops scrolling let all the visible cells know they need to grab their images. Fire off a background thread to get the images then cache them in memory.
    2. Same as above, except write the images to disk.
    Has anyone played with these methods to find the breakpoint for when keeping the images in memory is too much of a burden?

    I have been trying to do either.
    Right now I have the images download when the cell is created, and they are then stored in an NSMutableArray. The array is initially populated with NSString values of the URLs to the images. I then test whether the object at the current table view index is a UIImage; if not, I download the image and replace the existing NSString in the array with the UIImage.
    - (UIImage *)newUIImageWithURLString:(int)index
    {
        if (![[imgarr objectAtIndex:index] isKindOfClass:[UIImage class]]) {
            NSLog(@"image not there");
            UIImage *img2get = [[UIImage alloc] initWithData:
                [NSData dataWithContentsOfURL:
                    [NSURL URLWithString:[imgarr objectAtIndex:index]]]];
            [imgarr replaceObjectAtIndex:index withObject:img2get];
            [img2get release];
        }
        return [imgarr objectAtIndex:index];
    }
    This works fairly well, but it does stall the scrolling when I download the image, because I am calling it like this in cellForRowAtIndexPath:
    UIImage *cellimage = [self newUIImageWithURLString:indexPath.row];
    cell.image = cellimage;
    I am looking into using a background process for the actual downloading so as not to interfere with the table operations. Have you any thoughts on the best way to do this?
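    One way to move the download off the main thread, as a rough sketch building on the code above (it assumes the same imgarr array and a reachable tableView outlet; for production code you would also need to guard imgarr against concurrent access):
    // Kick off a fetch for one row without blocking scrolling
    - (void)startImageLoadForRow:(NSNumber *)rowNumber
    {
        [self performSelectorInBackground:@selector(loadImageInBackground:)
                               withObject:rowNumber];
    }
    // Runs on a background thread, so it needs its own autorelease pool
    - (void)loadImageInBackground:(NSNumber *)rowNumber
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSInteger row = [rowNumber integerValue];
        NSData *data = [NSData dataWithContentsOfURL:
            [NSURL URLWithString:[imgarr objectAtIndex:row]]];
        UIImage *img = [[UIImage alloc] initWithData:data];
        if (img != nil) {
            [imgarr replaceObjectAtIndex:row withObject:img];
        }
        [img release];
        // UIKit work must happen on the main thread
        [tableView performSelectorOnMainThread:@selector(reloadData)
                                    withObject:nil
                                 waitUntilDone:NO];
        [pool release];
    }
    In cellForRowAtIndexPath you would return a placeholder image and call startImageLoadForRow: whenever the array still holds an NSString for that row.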

  • UC on UCS RAID with TRCs best practices

    Hi,
    We bought UCS servers to do UC on UCS. The servers are TRC#1 (C240 M3S), hence with 16x300GB drives.
    I am following this guide to create the RAID (I actually thought they would come pre-configured but it does not seem to be the case):
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/virtual/CUCM_BK_CF3D71B4_00_cucm_virtual_servers.pdf
    When it comes to setting up the RAID for the C240-M3, it is mentioned that I should create 2 RAID 5 arrays of 8 disks each, 1 per SAS Adapter.
    The thing is that on my servers I apparently only have 1 adapter that is able to control all the 16 Disks. It might be a new card that was not available at the time the guide was written. 
    So my question is: Should I still configure two RAID 5 volumes although I only have one SAS adapter or can I use a single RAID 5 (or other) volume? 
    If I stick to two volumes, are there recommendations, for example, to put some UC apps on one volume and some others on another volume? Those servers will be used for two clusters, so I was thinking of using one datastore per cluster.
    Thanks in advance for your thoughts
    Aurelien

    Define "Best"?
    It really comes down to what your requirements are, i.e. what applications are you going to use, are you going to use SAN, how many applications, what is your budget, etc.
    Here is a link to Cisco's UC on UCS wiki:
    http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware
    HTH,
    Chris

  • Need advise for best practice when using Toplink with external transaction

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions, so we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the following initial recommendations.
    Since we are not familiar with external transactions, I would like members of this forum, and experts, to help comment on whether these recommendations are indeed valid and in line with best practice. And for folks that have done this in their projects: what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression)
    {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes, the findSomeObject method will now read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression)
    {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException
    {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As it generally is with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls, and when you use a UnitOfWork, are still dependent on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached: this is only required if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. In order to use it, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Best practices in wireless configuration?

    Hi,
    Is there a best practice document that covers 3500 APs with 5508 controllers? My questions are below.
    1. Do I configure each AP to non overlapping neighbor channels(1,6,11 for 2.4GHz) or leave that to controller to decide? Does controller change the channel of an AP when it sees congestion on a specific frequency?
    2. For 5 GHz is it good idea to bond the channels? What frequency to use for neighboring APs? OR again, leave it to controller to shift as needed?
    3. For security what's best practices? 802.1x or different?
    Thanks,
    Sm

    1. Do I configure each AP to non-overlapping neighbor channels (1, 6, 11 for 2.4 GHz) or leave that to the controller to decide?
    Let the controller(s) decide. By default, Dynamic Channel Assignment (DCA) checks each channel for interference every 600 seconds. Because you have 3500-series APs, make sure you enable Event-Driven RRM (Radio Resource Management) on both bands.
    Does the controller change the channel of an AP when it sees congestion on a specific frequency?
    The controller will not change the channel when it sees congestion. The controller will change the channel if it sees interference on the same channel. CleanAir will change the channel when it sees interference from non-Wi-Fi sources like Bluetooth, microwave ovens, cordless phones, etc.
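    If you prefer the CLI, Event-Driven RRM is toggled per band. This is the AireOS syntax as best I recall it, so verify it against your controller's software release:
    config advanced 802.11a channel cleanair-event enable
    config advanced 802.11a channel cleanair-event sensitivity medium
    config advanced 802.11b channel cleanair-event enable
    config advanced 802.11b channel cleanair-event sensitivity medium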
    2. For 5 GHz is it good idea to bond the channels?
    Sure.
    What frequency to use for neighboring APs? OR again, leave it to controller to shift as needed?
    Leave this option in default.
    3. For security what's best practices? 802.1x or different?
    Sure - 802.1x.

  • SAP CRM V1.2007 Best Practice

    Hello,
    We are preparing the installation of a new CRM 2007 system and we want to have a good demo system.
    We are considering two options:
    - SAP CRM IDES
    - SAP CRM Best Practice
    knowing that we have an ERP 6.0 IDES system we want to connect to.
    The Best Practice seems to have a lot of preconfigured scenarios that will not be available in the IDES system (known as "SAP All-in-One").
    How can we start the automatic installation of the scenarios (with solution builder) connecting to the ERP IDES system?
    Reading the BP quick guide, it is mentioned that in order to have the full BP installation we need to have an ERP system with another Best Practice package.
    Will the pre-customized IDES data in ERP be recognized in CRM?
    In other words, are the IDES master data, transactional data and organizational structure the same as in the Best Practice package?
    Thanks a lot in advance for your help
    Benoit

    Thanks a lot for your answer Padma Guda,
    The difficult bit in this evaluation is that we don't know exactly the difference between the IDES and the Best Practice. That is to say, what is the advantage of a CRM Best Practice connected to an ERP IDES, as opposed to a CRM IDES system connected to an ERP IDES system?
    As I mentioned, we already have an ERP IDES installed as the back-end system.
    I believe that if we decide to use the ERP IDES as the ERP back end, we will lose some of the advantages of having an ERP Best Practice connected to a CRM Best Practice, e.g. sales area already mapped and known by the CRM system, ERP master data already available in CRM, transactional data already mapped, pricing data already mapped, etc.
    Is that right? Or do we have to do an initial load of ERP in all cases?

  • Best Practice - Hardware requirements for exchange test environment

    Hi Experts,
    I'm new to Exchange and I want to have a test environment for learning, testing, patches and updates.
    In our environment we have co-existence of 2010 and 2013, and I need to reproduce a close scenario in my test environment.
    I was thinking of having an isolated (not domain-joined) high-end workstation laptop (quad core i7, 32GB RAM, 1TB SSD) to implement the environment on, but management refused and replied "do it on one of the free servers within the live production environment at the Data Center"!
    I'm afraid that doing so could corrupt the production environment through some mistake in my configuration; I'm not an Exchange expert who could revert things if something went wrong.
    Is there a documented Microsoft recommendation on how and where to do this that I could send to them?
    Or could someone help with the best practice on where to have my test environment and how to set it up?
    Many Thanks
    Mohamed Ibrahim

    I think this may be useful; it's their official test lab setup guide:
    http://social.technet.microsoft.com/wiki/contents/articles/15392.test-lab-guide-install-exchange-server-2013.aspx
    Also, your spec should be fine as long as you run the VMs within their means.

  • Design Patterns/Best Practices etc...

    fellow WLI gurus,
    I am looking for design patterns/best practices especially in EAI / WLI.
    Books ? Links ?
    By patterns/best practices I mean, for instance:
    * When to use asynchronous/synchronous application view calls
    * Where to do validation (if you are connecting two EISs: in both EISs, or only in WLI?)
    * What if an EIS is unavailable? How to handle this in your workflow?
    * Performance issues
    Anyone want to share his/her thoughts on this?
    Kris

    Hi.
    I recently bought the WROX Press book Professional J2EE EAI, which discusses enterprise integration. Maybe not on a design-pattern level (if there is one), but it gave me a good overview and helped me make some design decisions. I'm not sure if it's technical enough for those used to such decisions, but it proved useful to me.
    http://www.wrox.com/ACON11.asp?WROXEMPTOKEN=87620ZUwNF3Eaw3YLdhXRpuVzK&ISBN=186100544X
    HTH
    Oskar

  • IPhoto 11: Best practice to export a slideshow

    Hi all,
    after reading some posts on this matter I'm almost more confused than when I began, so I hope someone can help me with the "latest" best practices.
    I have iPhoto 11 and normally I make slideshows for watching on an HD TV.
    For DVD burning I'm using Toast 11 (instead of iDVD).
    Then, considering that I'll lose quality converting to DVD, what are the best export parameters to use?
    I've been suggested to use (I may have translated something wrongly as I'm using the Italian version):
    Export -> Personal Setting
    Keeping "Export for Quick Time usage" -> Options
    Under Video tab:
    Compression: H.264
    Speed Frequency: 25 (for PAL) with Automatic below
    Compressor Quality: Max
    Codify: Quick (single)
    Under Dimension tab:
    Dimension: 1920x1080 HD
    Thanks in advance for your advice here.
    Regards
    Giancarlo

    OK, so let me repeat just to be sure I understood well.
    Could you confirm that all parameters I'm using are OK but dimensions?
    About dimensions:
    a. use 720x576 when I'm burning a standard DVD for 4:3
    b. use 1024x576 when I'm burning a standard DVD for 16:9 .... is this feasible, or is only option a. available?
    c. use 1280x720 or 1920x1080 when I'm burning a BR (for HD watching)
    Am I right ?
    Regards
    Giancarlo

  • Ibook to desktop syncing best practices

    I am trying to keep my iBook in sync with my G5 desktop - client projects in addition to the Entourage data files, etc. I've come across numerous scenarios and recommendations. Any best practice suggestions (software, syncing scenarios, automation, etc.) would be greatly appreciated.
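    One common command-line approach is rsync over SSH; a minimal sketch (the hostname and folder are placeholders):
    # Mirror the laptop's project folder to the desktop; --delete removes files that no longer exist on the source
    rsync -av --delete ~/ClientProjects/ user@g5-desktop.local:~/ClientProjects/
    Run it from the iBook whenever you finish working, or schedule it for automation. Note that Entourage keeps its data in one large database file, so sync that only while Entourage is closed.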

    Hello Hugh
    The settings that you are looking for are in iTunes; you can choose to sync only unlistened podcasts.
    If you go to the podcast section in iTunes, there is a field that says Keep, and you can choose from the following options:
    All episodes
    All unplayed episodes
    Most recent episode
    Then when you connect your iPod you will see an option to sync only the unlistened podcasts, and you should be all set.
