Hugemem Kernel with Very Large Memory

Hi All,
OS:-Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
Oracle Version:-Release 10.2.0.1.0 (32 bit)
SGA:-2400M
PGA:-1024M
Kernel:-2.6.9-78.ELhugemem
MemTotal: 8307568 kB
We currently have 8 GB of RAM but are only able to use 3.4 GB of it, so we have upgraded the kernel to the hugemem kernel
in order to use the remaining memory.
Can anybody please share the steps required to make use of the remaining memory?
Thanks
Jamsher

Currently I am reading the following document:
http://download.oracle.com/docs/cd/B28359_01/server.111/b32009/appi_vlm.htm#CACFBJGF
But I have a doubt about this step:
mount -t shm shmfs -o size=20g /dev/shm
How much memory do I need to allocate in this step, given that I have 8 GB? (See the sketch just after this post.)
Please suggest.
Regards
Jamsher
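For reference, a minimal sketch of the 32-bit VLM setup that document describes, assuming roughly 6g of the 8 GB box is given to the in-memory file system and the rest is left for the OS, the PGA and other processes (the 6g size and the block count below are illustrative assumptions, not recommendations):

# size the in-memory file system to hold the indirect buffer cache, leaving headroom
# for the OS and PGA; remount /dev/shm if it is already mounted with a smaller size
mount -t shm shmfs -o size=6g /dev/shm
# init.ora: VLM uses the old-style block-count parameter instead of db_cache_size
use_indirect_data_buffers=true
db_block_buffers=524288        # 524288 blocks * 8 KB block size = 4 GB buffer cache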

Similar Messages

  • ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2

    [oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
    ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
    DWH12.__large_pool_size=16777216
    DWH11.__large_pool_size=16777216
    DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DWH12.__pga_aggregate_target=2902458368
    DWH11.__pga_aggregate_target=2902458368
    DWH12.__sga_target=4328521728
    DWH11.__sga_target=4328521728
    DWH12.__shared_io_pool_size=0
    DWH11.__shared_io_pool_size=0
    DWH12.__shared_pool_size=956301312
    DWH11.__shared_pool_size=956301312
    DWH12.__streams_pool_size=0
    DWH11.__streams_pool_size=134217728
    #*._realfree_heap_pagesize_hint=262144
    #*._use_realfree_heap=TRUE
    *.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    *.db_name='DWH'
    *.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
    *.db_recovery_file_dest_size=7373586432
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
    DWH12.instance_number=2
    DWH11.instance_number=1
    DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
    DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
    *.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
    *.log_archive_format='DWH_%t_%s_%r.arc'
    #*.memory_max_target=7226785792
    *.memory_target=7226785792
    *.open_cursors=1000
    *.processes=500
    *.remote_listener='LISTENERS_SCAN'
    *.remote_login_passwordfile='exclusive'
    *.sessions=555
    DWH12.thread=2
    DWH11.thread=1
    DWH12.undo_tablespace='UNDOTBS2'
    DWH11.undo_tablespace='UNDOTBS1'
    SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
    [oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
    # Kernel sysctl configuration file for Red Hat Linux
    # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
    # sysctl.conf(5) for more details.
    # Controls IP packet forwarding
    net.ipv4.ip_forward = 0
    # Controls source route verification
    net.ipv4.conf.default.rp_filter = 1
    # Do not accept source routing
    net.ipv4.conf.default.accept_source_route = 0
    # Controls the System Request debugging functionality of the kernel
    kernel.sysrq = 0
    # Controls whether core dumps will append the PID to the core filename
    # Useful for debugging multi-threaded applications
    kernel.core_uses_pid = 1
    # Controls the use of TCP syncookies
    net.ipv4.tcp_syncookies = 1
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
    # Controls the default maxmimum size of a mesage queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    #kernel.shmall = 4294967296
    kernel.shmall = 8250344
    # Oracle kernel parameters
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    kernel.shmmax = 536870912
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304
    Could you please let me know how to resolve this error?

    CAUSE: User specified one or more of { db_cache_size , db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2,4,8,16,32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
    ACTION: Very Large Memory can only be enabled with the old (pre-Oracle_8.2) parameters
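    Following the ACTION text above, a hedged sketch of what the buffer-cache section of the pfile could look like if VLM really is wanted; the block count is a placeholder, and simply not enabling VLM (leaving use_indirect_data_buffers unset) is the other way out of the error:
    # VLM only works with the old-style, block-count buffer cache parameter
    use_indirect_data_buffers=true
    db_block_buffers=393216          # placeholder: number of db_block_size blocks
    # db_cache_size, db_keep_cache_size, db_recycle_cache_size and db_nK_cache_size
    # must not be set alongside it; memory_target sizes those caches implicitly, so it
    # would presumably have to give way to explicit sga/pga settings as well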

  • JRockit for applications with very large heaps

    I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50 GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects are stored in the old generation and a smaller number in the nursery. The operation we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses). Currently we are using huge pages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50 GB of heap for both the max and min. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently we are using the default, which optimizes for throughput).
    I used JRMC to profile the operation, and here are the results I thought were interesting:
    liveset 30%
    heap fragmentation 2.5%
    GC Pause time average 600ms
    GC Pause time max 2.5 sec
    It had to do 4 young generation collections, which were very fast, and then 2 old generation collections, which were each about 2.5 s (the entire operation takes 45 s).
    For the long old generation collections, about 50% of the time was spent in mark and 50% in sweep. When you get down to sub-level 2, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
    Heap usage: although 50 GB is committed, usage fluctuates between 32 GB and 20 GB. To give you an idea of what is stored in the heap, about 50% of it is char[] and another 20% is int[] and long[].
    My question is: are there any other flags I could try that might help improve performance, or is there anything I should be looking at more closely in JRMC to help tune this application? Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance, but we noticed that larger heaps did not always improve performance.
    Thanks in advance for any help you can provide.

    Any suggestions for using JRockit with very large heaps?
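    For reference, the configuration described above boils down to a launch line along these lines (fixed 50 GB heap, -XXaggressive, call profiling and the default throughput collector); the jar name is a placeholder, and the exact spelling of the large-pages option differs between JRockit releases, so it is only noted in the comment:
    # large pages are also enabled as described above; the flag spelling depends on the JRockit release
    java -Xms50g -Xmx50g -XXaggressive -XX:+UseCallProfiling -Xgc:throughput -jar in-memory-db.jar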

  • Ipod touch 4th, running OS5, boots up with very large icons, impossible to navigate, how to get back standard sized homepage?

    Ipod touch 4th, running OS5, boots up with very large icons, impossible to navigate, need to return to standard sized homescreen?

    Triple-click the Home button, then go to Settings > General > Accessibility and turn Zoom off. If problems persist, see:
    iPhone: Configuring accessibility features (including VoiceOver and Zoom)

  • Best data structure for dealing with very large CSV files

    Hi, I'm writing an object that stores data from a very large CSV file. The idea is that you initialize the object with the CSV file, and it then provides lots of methods to make manipulating and working with the CSV file simpler - operations like copying a column, eliminating rows, performing some calculation on all values in a certain column, etc., plus a method for printing back out to a file.
    However, the CSV files will probably be in the 10 MB range, maybe larger, so simply loading one into an array isn't possible, as it produces an OutOfMemoryError.
    Does anyone have a data structure they could recommend that can store the large amount of data required and is easily writable? I've currently been using a RandomAccessFile, but it is awkward to write to, as well as needing an external file which would have to be cleaned up after the object is removed (something very hard to guarantee occurs).
    Any suggestions would be greatly appreciated.
    Message was edited by:
    ninjarob

    How much internal storage ("RAM") is in the computer where your program should run? I think I have 640 MB in mine, and I can't believe loading 10 MB of data would be prohibitive, not even if the size doubles when the data comes into Java variables.
    If the data size turns out to be prohibitive of loading into memory, how about a relational database?
    Another thing you may want to consider is more object-oriented (in the sense of domain-oriented) analysis and design. If the data is concerned with real-life things (persons, projects, monsters, whatever), row and column operations may be fine for now, but future requirements could easily make you prefer something else (for example, a requirement to sort projects by budget or monsters by proximity to the hero).
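    To illustrate the "just load it" suggestion above, here is a minimal sketch that reads the whole CSV into memory with a BufferedReader, holding each row as a String[]; the file name is a placeholder and the split is naive (no handling of quoted commas):
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class CsvLoader {
        public static void main(String[] args) throws IOException {
            List<String[]> rows = new ArrayList<String[]>();
            // "data.csv" is a placeholder path; a 10 MB file fits comfortably in a default heap
            BufferedReader in = new BufferedReader(new FileReader("data.csv"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    rows.add(line.split(",", -1)); // keep trailing empty columns
                }
            } finally {
                in.close();
            }
            System.out.println("Loaded " + rows.size() + " rows");
            // column copies, row deletions and per-column calculations can now work on the in-memory list
        }
    }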

  • Any suggestions on calculating with very large or small numbers?

    It seems that double values have about 17 decimal places (10e17) of precision.
    Is there a way in iPhone calculations to get more precision for very large and small numbers, like 10e80 and so forth? I know that's more than the entire number of atoms in the universe, but still.
    I tried "long double" but that didn't seem to make any difference.
    Just a limitation?
    Thanks,
    doug

    Hmmm... maybe I was just having a problem with my formatted string then?
    I was using the NSString %g format, which is supposed to print in exponential notation if the number is greater than 1e4 or less than 1e-4, or something like that.
    But I was not getting any exponents greater than 1e17, and then I was apparently getting overflows because the numbers were coming out with negative mantissas.
    All the variables involved were doubles...
    How did you "look at" z?
    Thanks,
    doug
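    For what it's worth, a plain double already spans magnitudes up to roughly 1.8e308 while carrying only about 15-17 significant decimal digits, so 1e80 is no problem for the type itself; the limitation discussed here is in how many digits the format string shows. A tiny illustration in plain C (the %g conversion behaves the same way in NSString formatting):
    #include <stdio.h>

    int main(void) {
        double huge = 1.0e80;      /* well inside double's range (~1.8e308 max) */
        double tiny = 1.0e-80;
        printf("%g\n", huge);      /* prints 1e+80 */
        printf("%g\n", tiny);      /* prints 1e-80 */
        printf("%.17g\n", 0.1);    /* shows the ~17-significant-digit precision limit */
        return 0;
    }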

  • Problem in compilation with very large number of method parameters

    I have a Java file which I created using WSDL2Java. Since the actual WSDL has a complex type with a large number of elements (around 600) in it, the resulting Java file (from WSDL2Java) has a method that takes 600 parameters of various types. When I try to compile it using javac at the command prompt, it says "Too many parameters" and doesn't compile. The same file compiles successfully using JBuilder X. The only way I could compile successfully at the command prompt is by reducing the number of parameters to around 250, but unfortunately that is not a workable solution. Does Sun specify any upper bound on the number of parameters that can be passed to a method?

    "... a method that takes 600 parameters ..." - Not compatible with the spec; see Method Descriptors.
    "When I try to compile it using javac at the command prompt, it says 'Too many parameters' and doesn't compile." - As it should.
    "The same is compiling successfully using JBuilder X." - If JBuilder produces a class file, that class file may very well be invalid.
    "The only way I could compile successfully at the command prompt is by reducing the number of parameters to around 250" - Which is what the spec says.
    "but unfortunately that it's not a workable solution." - Pass an array of objects; an array is just one object.
    "Does Sun specify any upper bound on number of parameters that can be passed to a method?" - Yes.
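    The JVM specification caps a method descriptor at 255 parameter slots (long and double take two slots each, and `this` takes one for instance methods), which is why javac refuses 600. Below is a minimal sketch of the "array of objects" workaround suggested above; the class name and the idea of indexing the 600 values are illustrative assumptions, not part of the generated WSDL2Java code:
    // Hypothetical holder that replaces 600 individual method parameters
    public class CustomerRequest {
        private final Object[] fields = new Object[600];

        public void set(int index, Object value) { fields[index] = value; }
        public Object get(int index)             { return fields[index]; }

        public static void main(String[] args) {
            CustomerRequest req = new CustomerRequest();
            req.set(0, "ACME Corp");          // instead of parameter #1
            req.set(1, Integer.valueOf(42));  // instead of parameter #2
            // ...populate the rest, then pass the single req reference to the service method
            System.out.println(req.get(0));
        }
    }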

  • Need help with "Very large content bounds" error...

    Hey guys,
    I've been having an issue with Adobe Muse [V7.0, Build 314, CL 778523] - one of the widgets I tried from the Exchange library seemed to bug out and created a large content box.
    This resulted in this error:
    Assert: "Very large content bounds (W:532767.1999999997 H:147446.49743999972) detected in BoxUtils::childBounds"
    Does anyone know how I could fix this issue?

    Hi there -
    Your file has been repaired and emailed back to you. Please let us know if you run into any other issues.
    Thanks!
    -Sam

  • Working with VERY LARGE tables - is it possible to bypass row counting?

    Hello!
    For working with large result sets, ADF provides the `Range Paging` mechanism for views, described in section 27.1.5 of the Developer’s Guide For Forms/4GL Developers.
    It works well, but in its common mode it counts the total row count to allow paging. In some cases the query `select count(1) from (SELECT ...)...` can take a very, very long time.
    But if a view object doesn't know the row count (for example, if we override the getEstimatedRowCount() method), the paging controls don't appear in the user interface.
    Meanwhile, I suggest that it should be possible to display two paging links - Prev and Next - without knowing the row count. Is there a way to do it?
    Thanks in advance,
    Ilya Rodionov.

    Hi Ilya,
    while you wait for Frank to dig up the right sample, you can read this thread:
    Re: ADF BC: Performance issue with getEstimatedRowCount (ER?)
    There we discuss the exact issue.
    Timo

  • Bridge CS4 with very large image archves

    I have just catalogued 420,000 images from 3TB of my photographic collection. For those interested, the master cache files became quite large and took about 9 days of continuous processing:
    cache size: 140 gb
    file count: 991,000
    folder count: 3000
    All cache files were also exported to the disk directories.
    My primary intent was to use the exported cache files as a "quick" browsing mechanism with Bridge. Of course, "quick" is a rather optimistic word; however, it is very significantly faster than having Bridge rebuild caches as needed.
    I am now trying to decide if it is worth keeping the master bridge cache because of the limitations of the bridge implementation which is not very flexible as to where and when the master cache adds new temporary cache entries.
    Any suggestions as to the value of keeping the master cache would be appreciated. I don't really need keyword or other rating systems, since I presently use a simple external database for that sort of image location.
    I am also interested in knowing if the "500,000" entry cache limitation is real - or if more than 500,000 images can be stored in the master cache since I will be exceeding this image count next year.

    I have a Bridge 5 cache system with 600,000 images over 8 TB of networked disk.  I too use this to "speed up" the browsing process and rely primarily on keyword processing to group images.  The metadata indexing is, for practical purposes, totally useless (it never ceases to amaze me why Adobe thinks it useful for me to know how many images were taken with a lens focal length of 73mm - or some other equally useless statistic).  The one serious missing keyword-indexing feature I can think of is the ability to have a keyword associated with a directory.  For example, I have shot many dozens of dance, theatre, music and sports productions - it would be much more useful to catalogue a directory that has the keywords "Theatre" and "Romeo and Juliette" than to attempt to keyword each individual image.   It is, of course, possible to work around the restrictions, but that is very unclean and certainly less than desirable.   Keywording a project (i.e. a directory) is a totally different kettle of fish than keywording an image.  I also find the concept of the "collection" very useful and well implemented.
    I do maintain a complete cache build of my system.  It is spread over two master caches, one for the first 400,000 images and a second for the next 400,000 (I want to stay within the 500,000 cache size limit - it is probably associated with the MySQL component of Bridge, and I think it may have problems if you exceed the limit by a substantial amount.  With Bridge on CS3, when the limit was exceeded, the cache system self-destructed and I had to rebuild).
    The only thing I can think of (and that seems to be part of Adobe's design) is that Bridge will rebuild the master cache for a working directory "for no apparent reason", such as when certain (unknown) changes are made to ACR.   Other automatic rebuilds have been reported by others, however Adobe does not comment on when or what causes a rebuild.  Of course, this has a serious impact on getting work done - it is a bloody pain to have Bridge suddenly process 1500 thumbs and preview extracts simply to keep the master cache completely and perfectly synchronized (in terms of image quality) with what might be displayed if you happen to want to load a raw image into Photoshop.  This strategy is IMHO completely out of step with how (at least I) use the browsing features of Bridge.
    It may be of concern that Adobe may, for design reasons, change the format of the directory cache files and you will have to completely rebuild all master and directory caches yet again - which is a real problem if you have many hundreds of thousands of images.  This happened when the cache system changed from CS3 to CS4 - and Adobe did not provide a conversion programme to migrate the old format to the new.  This significantly adds to the rebuild time since each raw image must be completely reprocessed.  My current rebuild of the master cache has taken over two elapsed weeks of continuous running.
    It would be nice if Adobe would allow some control over what is recorded in the master cache - for example, "do you wish meta data to be indexed".
    (( As an aside, Adobe does not comment on why using Bridge to import images from a CF card results in the building of an .xmp file with nothing but metadata for each raw file. I am at a loss to speculate what really useful thing results, other than maybe speeding up the processing of the (IMHO useless) aspects of metadata. ))
    To answer your question, I do think the master cache is worth keeping - and we can pray that Adobe puts more thought into why the master cache exists and who uses the type of information presently indexed within the cache.

  • Very large memory size in process window

    I am new to Mac OS X. In Windows, Explorer never takes more than 100 MB of memory in my processes window, so why is it that when I am only surfing one site, or no sites at all but Safari is running in the Dock, my memory size is >256 MB?! That is just a little too much memory for one program to be taking up, considering I only have 512 MB.

    Hello Applesr4eva:
    Welcome to Apple discussions.
    This is a very good knowledge base article that discusses OS X memory utilization:
    http://docs.info.apple.com/article.html?artnum=107918
    Don't compare Windows with OS X. OS X uses sophisticated algorithms to manage both real and virtual memory resources. My advice to you would be to forget it unless you are having performance problems. I am running a G4 with 512 MB and it does not breathe hard unless I use processor intensive applications (rare for me in my home computing environment).
    Barry

  • Since getting version 22 (release update channel) I had a few days with very large text; today, Firefox will not start. How to get it running again???

    I've searched for an .exe file in Firefox program files, but there doesn't seem to be one anywhere. I'd like to uninstall, download a new program and reinstall, but I'd rather not lose bookmarks and other settings.
    Running Windows 7 on a Sony Vaio laptop. Any suggestions? Thanks in advance.

    Certain Firefox problems can be solved by performing a ''Clean reinstall''. This means you remove the Firefox program files and then reinstall Firefox; you WILL NOT lose any bookmarks, history, or settings. Please follow these steps:
    '''Note:''' You might want to print these steps or view them in another browser.
    #Download the latest Desktop version of Firefox from http://www.mozilla.org and save the setup file to your computer.
    #After the download finishes, close all Firefox windows (click Exit from the Firefox or File menu).
    #Delete the Firefox installation folder, which is located in one of these locations, by default:
    #*'''Windows:'''
    #**C:\Program Files\Mozilla Firefox
    #**C:\Program Files (x86)\Mozilla Firefox
    #*'''Mac:''' Delete Firefox from the Applications folder.
    #*'''Linux:''' If you installed Firefox with the distro-based package manager, you should use the same way to uninstall it - see [[Installing Firefox on Linux]]. If you downloaded and installed the binary package from the [http://www.mozilla.org/firefox#desktop Firefox download page], simply remove the folder ''firefox'' in your home directory.
    #Now, go ahead and reinstall Firefox:
    ##Double-click the downloaded installation file and go through the steps of the installation wizard.
    ##Once the wizard is finished, choose to directly open Firefox after clicking the Finish button.
    Please report back to see if this helped you!

  • Pivot table with very large number of columns

    Hello,
    here is the situation:
    One table contains raw data; from this table I feed another with extracted information (3 fields); I have to turn the content into a pivot table.
    Ro --- Co --- Va
    A      A      1
    A      B      1
    A      C      2
    B      A      11
    Turned into:
           A      B      C ...
    A      1      1      2
    B      11     null   null
    To do this I do a query like:
    select r, sum(decode(c,'A',Va)) COLA, sum(decode(c,'B',Va)) COLB, sum(decode(c,'C',Va)) COLC, ... sum(decode(c,'XYZ',Va)) COLXYZ from table group by r
    The statement is generated by a script (CFMX) and it works until I reach a query that tries to have 672 values for c, which means 672 columns...
    Oracle doesn't like that: ORA-01467: sort key too long
    I like this approach as it gets the result fast.
    I have tried a different solution at the CFMX level for that specific query, but I got a timeout (querying the table with a loop on Co within a loop on Ro).
    Is there any workaround?
    I am using Oracle 9i.
    Thank you!

    insert into extracted_data select c, r, v, p from full_data where <specific_clause>
    The values for C come from a query: select distinct c from extracted_data
    and it is the same for R.
    R and C are varchar2(3999).
    I suppose that I can split on the first letter of the C column as:
    SELECT r, low.cola, low.colb, . . ., low.colm,
    high.coln, high.colo, . . ., high.colz
    FROM (SELECT r, SUM(DECODE(c, 'A', va)) cola, . . .
    SUM(DECODE(c, 'M', va)) colm
    FROM table
    WHERE c like 'A%'
    GROUP BY r) Alpha_A,
    (SELECT r, SUM(DECODE(c, 'N', va)) coln, . . .
    SUM(DECODE(c, 'Z', va)) colz
    FROM table
    WHERE c like 'B%'
    GROUP BY r) Alpha_B,
    (SELECT r, SUM(DECODE(c, 'N', va)) coln, . . .
    SUM(DECODE(c, 'Z', va)) colz
    FROM table
    WHERE c like 'C%'
    GROUP BY r) Alpha_C
    (SELECT r, SUM(DECODE(c, 'zN', va)) coln, . . .
    SUM(DECODE(c, 'zZ', va)) colz
    FROM table
    WHERE c like 'Z%'
    GROUP BY r) Alpha_Z
    WHERE alpha_A.r = alpha_B.r and alpha_a.r = alpha_C.r ... and alpha_a.r = alpha_z.r
    I will have 27 select statements joined... I have to check whether, even like that, I will not reach the limit within one of the select statements.
    "in real life"
    select GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1 from
    (select r, sum(decode(C, 'Wall, unspecified',cases)) W0 from tmp_maqueje where upper(C) like 'W%' group by r) GRPW,
    (select r,
    sum(decode(C, 'Ceramic tiles, indoors',cases)) C0,
    sum(decode(C, 'Cement surface, outdoors (Concrete/cement block, see Structural element, A11)',cases)) C1
    from tmp_maqueje where upper(C) like 'C%' group by r) GRPC
    where GRPW.r = GRPC.r
    order by GRPW.r, GRPW.W0, GRPC.C0, GRPC.C1
    Message was edited by:
    maquejp

  • CMR 1-N with very large N. Can it be done?

    Hello,
    Say I have a Customer bean with a 1-N relationship with CustomerHistory implemented using CMR, then how do I practically add a CustomerHistory entry?
    I may have 10000 history records for each customer record. When I need to add a new history record all the examples I have seen would have me do:
    collection = getHistory();
    collection.add(newHistory);
    setHistory(collection);
    That seems a bit ridiculous to me - I have to retrieve 10000 records just so I can add one!? Am I missing something - is there a more efficient way I should be doing this?
    Thanks,
    Paul

    Having run a few tests it appears that the getHistory() is pretty quick [due to entity caching?], and I should not be doing the setHistory() as that just causes references to all existing history records to be dropped and replaced with those in the collection passed as a parameter to setHistory(). The new history record is added by the container's implementation of collection.add().
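    In other words (a minimal sketch against hypothetical CMP 2.x local interfaces, following the reply above): fetch the managed collection once and let the container persist the add; there is no need to hand the whole collection back through setHistory():
    // Customer and CustomerHistory are hypothetical CMP entity beans joined by the 1-N CMR
    public void addHistoryEntry(Integer customerId, String note)
            throws javax.ejb.FinderException, javax.ejb.CreateException {
        CustomerLocal customer = customerHome.findByPrimaryKey(customerId);
        CustomerHistoryLocal entry = historyHome.create(note);
        // getHistory() returns the container-managed collection; add() alone makes the
        // container persist the new relationship - no setHistory(collection) round trip
        customer.getHistory().add(entry);
    }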

  • Capturing GL balances snapshot with very large movement transaction

    Dear Guru,
    I have a design question to ask about/consult on. I would really appreciate it if you could share your thoughts and enlighten my analysis.
    Currently, my environment has daily postings of 500k line items captured in BSEG. Yearly, we have an estimated 110 million line items. Daily, when I need to extract and run the GL balance report, it takes 7 hours to complete the run. The way the customised program is coded is Opening Balance + Day's Movement + Adjustment entries.
    Any idea how to improve the performance when processing such a huge number of records?
    Hope to hear from you - many thanks.
    Regards
    Andrew

    Some of these notes would be useful input for your ABAPer. You should involve your ABAPer again to enhance the performance of the report, as this is not a standard report.
    Note 1135916 - Line items: Help for analysis for long runtime
    Note 989776 - Performance improvement of FBL3N
    Note 1330434 - FBL5N: Performance improvement by changed selection
