Max possible memory freq?

Hi, I bought my Lenovo IdeaPad B590 (20208, i5-3210M, 4GB, 1TB, USB 3.0, fingerprint reader) two weeks ago.
With Speccy I saw that I have an 800MHz 4GB DDR3 module inside, and I am wondering why. Is 800MHz the maximum it can handle? I want to buy one more 4GB module, but everything on sale is rated above 1300MHz.
I want to know how many MHz the motherboard can handle.
Also, is there any way to register the product if I live in Slovakia? The webpage says that product registration is not available in Slovakia.
I would be happy to get the full hardware specification, for example motherboard information such as socket type, maximum memory speed in MHz, etc.
Thank you.

hi erikkubica,
Welcome to the Lenovo Forums.
What you're seeing in Speccy is the RAM's I/O bus clock, which is 800MHz. Since you're using DDR (Double Data Rate) memory, this is equivalent to 1600MHz. According to this PDF data sheet, the Lenovo B590 supports the following RAM:
8GB max / PC3-12800 1600MHz DDR3
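As a quick arithmetic check: an 800MHz I/O bus clock with two transfers per clock gives 1600MT/s (the "1600MHz" in the module name), and 1600MT/s * 8 bytes per transfer = 12800MB/s, which is where the PC3-12800 label comes from.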
For product registration, you will need a proof of purchase; contact Lenovo support to have your product registered and your warranty updated. (In some countries there is no need to register the product, as warranty work is based on the serial number or the proof of purchase.)
Best regards,
neokenchi

Similar Messages

  • I have not been able to use iTunes for several months. Every time I open iTunes, it freezes my computer such that there is about a minute between each action. I am running iTunes 11 on Mac OS 10.6.8 and have a computer with maxed-out memory.

    I have not been able to use iTunes for several months. Every time I open iTunes, it freezes my computer such that there is about a minute between each action. I am running iTunes 11 on Mac OS 10.6.8 and have a computer with maxed-out memory. Help! I can't access my iTunes content.


  • Impact of project.max-shm-memory configuration in Solaris 10

    Dear All,
    I'm not sure if this is an error or purposely configured as it is.
    The current kernel configuration of project.max-shm-memory is 400GB, while the hardware only has 8GB RAM, and SGA_MAX_SIZE is set to 5GB (the Oracle database is 10g).
    Will there be any impact in the long run with this configuration, based on your experience?
    project: 1: user.root
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged       400GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    Suggestions and advice are much appreciated.
    Thanks and Best Regards,
    Eric Purwoko

    Hi Helios,
    Thanks! The recommendation is 4294967295, but my SGA max and target are 5GB. Will it cause a problem if I set project.max-shm-memory lower than the SGA?
    Thanks for the link too. I guess I'd better put that configuration in /etc/system too.
    But now I am wondering what the best value is, given my SGA max configuration.
    Best Regards,
    Eric
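    A note on the last question above (the 6G value and the user.oracle project below are illustrative, not taken from this thread): if project.max-shm-memory is set lower than the SGA, the instance fails at startup with ORA-27102 "out of memory" when it tries to create the shared memory segment, much like the next message below. Sizing the limit at or somewhat above SGA_MAX_SIZE avoids that; for a 5GB SGA, something like:
    projmod -s -K "project.max-shm-memory=(priv,6G,deny)" user.oracle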

  • Max-shm-memory problem - out of memory. No space left on device

    Hi Everyone,
    First time post. I'm a UNIX SA trying to troubleshoot the following problem on Solaris 10.
    SQL> startup pfile=inittest1.ora
    ORA-27102: out of memory
    Solaris-AMD64 Error: 28: No space left on device
    SQL>
    /u01/app/oracle/admin/dd00lod1/udump/dd00lod1_ora_25782.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ORACLE_HOME = /u01/app/oracle/product/10.2.0.4
    System name:    SunOS
    Node name:      reuxeuux1089
    Release:        5.10
    Version:        Generic_147441-10
    Machine:        i86pc
    Instance name: dd00lod1
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 0
    Unix process pid: 25782, image: oracle@reuxeuux1089
    skgm warning: ENOSPC creating segment of size 0000000005800000
    fix shm parameters in /etc/system or equivalent
    We have tried modifying the max-shm-memory settings, but no joy! Please assist if you can.
    Thanks
    Amreek
    prctl -n project.max-shm-memory -i project 100
    project: 100: ORACLE
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged       124GB      -   deny                                 -
            system          16.0EB    max   deny

    Consider reading The Fine Manual: the INSTALLATION GUIDE.
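    One thing worth checking before re-tuning, offered as a hedged suggestion since the 124GB limit shown above is far larger than the ~88MB segment being created: make sure the oracle processes actually run under project 100. If the oracle user logs in under user.oracle or default instead, the ORACLE project's limit never applies. For example:
    # su - oracle -c 'id -p'
    uid=100(oracle) gid=100(dba) projid=100(ORACLE)
    (the uid/gid values are examples). If the projid differs, set the user's default project in /etc/user_attr. Also note that ENOSPC from shmget can indicate an exhausted project.max-shm-ids or swap rather than max-shm-memory.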

  • Shminfo_shmmax in /etc/system does not match project.max-shm-memory

    If I specify 'shminfo_shmmax' in /etc/system and have the system default in /etc/project (no change made), the size of 'project.max-shm-memory' is about 100 times larger than 'shminfo_shmmax'.
    #more /etc/system // (16MB)
    set shmsys:shminfo_shmmax=16000000
    # prctl -n "project.max-shm-memory" -i project user.root
    => it will display something like the following:
    project: 1: user.root
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      1.49GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    1.49GB is about 100 times larger than 'SHMMAX'. If I add more entries to /etc/system like below, max-shm-memory becomes even larger.
    #more /etc/system
    set shmsys:shminfo_shmmax=16000000
    set semsys:seminfo_semmni=2000
    set shmsys:shminfo_shmmni=2000
    set msgsys:msginfo_msgmni=2048
    After I reboot with the above /etc/system and no change to /etc/project (all defaults, no values added):
    # prctl -n "project.max-shm-memory" -i project user.root
    project: 1: user.root
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      29.8GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    Can anyone shed light on this area, namely how to configure SHMMAX in /etc/system correctly?

    We saw similar behavior and opened a case with Sun.
    The problem turns out to be that the mapping from the (deprecated) /etc/system tunables to the (new) project resource limits isn't always one-to-one.
    For example, project.max-shm-memory gets set to shmsys:shminfo_shmmax * shmsys:shminfo_shmmni.
    The logic here is that under the /etc/system tunings you might have wanted the maximum number of segments, each of the maximum size, so the system has to be able to handle that. Makes sense to some degree. I think Sun updated one of their info docs on the process at the end of our case to make this clearer.
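    The numbers in the question bear this out (a quick check of the multiplication): 16,000,000 bytes * 2,000 segments = 32,000,000,000 bytes, which is the 29.8GB reported after the second reboot. The initial 1.49GB is 16,000,000 * 100, consistent with the Solaris default shminfo_shmmni of 100 being used in the conversion.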

  • How to set kernel parameter max-shm-memory automatically at startup

    Hi,
    We have a 11.1.0.7 Database on Solaris Sparc 10 64-bit server. We have settings of max-shm-memory as below;
    -bash-3.00# prctl -n project.max-shm-memory -i project default
    project: 3: default
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      50.0GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    -bash-3.00# prctl -n project.max-shm-memory -i project system
    project: 0: system
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      50.0GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    Whenever we restart the db, the second one is lost:
    bash-3.00$ prctl -n project.max-shm-memory -i project default
    project: 3: default
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      50.0GB      -   deny                                 -
            system          16.0EB    max   deny                                 -
    bash-3.00$ prctl -n project.max-shm-memory -i project system
    prctl: system: No controllable process found in task, project, or zone.
    So our sysadmin has to configure them again whenever we restart our db. How can I do this automatically at startup, without configuring it again from the command prompt?
    Thanks,
    Hatice
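    For what it's worth, a sketch of the usual fix (using the 50GB value from the output above): resource controls set with prctl on a live system last only until reboot; to make them permanent, record them in /etc/project, either by editing it directly or with projmod:
    projmod -s -K "project.max-shm-memory=(priv,50G,deny)" default
    projmod -s -K "project.max-shm-memory=(priv,50G,deny)" system
    Also note that a "No controllable process found" message from prctl does not mean the setting is lost; prctl -i project can only report on projects that currently have at least one running process.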

    OK, it is clear now. I have one more question. When I check the system project, I get the error below:
    # prctl -n project.max-shm-memory -i project system
    prctl: system: No controllable process found in task, project, or zone.
    The document says: "The reason for the message reported above is that there are no active processes belonging to the project."
    But that seems impossible for us, because according to our project settings it applies to the root user:
    bash-3.00$ cat /etc/project
    system:0::root::project.max-shm-memory=(priv,53687091200,deny)
    user.root:1::::
    noproject:2::::
    default:3::oracle::project.max-shm-memory=(priv,53687091200,deny)
    group.staff:10::::
    oracle:100::::project.max-shm-memory=(priv,53687091200,deny)
    Is it because I am checking as the oracle user and don't have sufficient privileges, or is there something wrong with it?
    Thanks.

  • Max-shm-memory - definition

    Hi Guys,
    I'm trying to get a clear definition of project.max-shm-memory. Let's say it's set to 4GB for user.oracle. Does this mean that the maximum amount of shared memory available to the project is 4GB, or that the maximum shared memory segment created by ANY process in the project can be 4GB (i.e., two processes could create two separate 4GB segments)? I'm pretty sure it's the former, but I wanted to check.
    Thanks,
    Tony

    Even though Sun says many of the kernel tunables are now obsolete in /etc/system, some, like shmmax, will actually still work if set within the global zone. The default is 1/4 of system memory.
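    As best I can tell from the Solaris resource controls documentation (hedged, from memory): project.max-shm-memory limits the total amount of System V shared memory available to the whole project, summed across all of its segments, so the former reading is the correct one. The project-wide value can be inspected with, e.g.:
    prctl -n project.max-shm-memory -i project user.oracle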

  • /etc/profile: line 28: ulimit: max locked memory: cannot modify limit: Operation not permitted

    Hi;
    I wrote a shell script which checks tablespace sizes, and it works well (I used it before for another client). Now I am trying to run it on one server and I get this error:
    /etc/profile: line 28: ulimit: max locked memory: cannot modify limit: Operation not permitted in /var/spool/mail/root
    Anyone has idea what is problem?
    Thanks

    Well, check line 28 of /etc/profile, and see what command it is trying to execute. If it's a ulimit -l command, check if the value it is trying to set the limit to is higher than the current ulimit value of the current user. If it is, then that operation is not allowed. You can increase the limit for the current user (at login time) by modifying /etc/security/limits.conf . This is documented in the various guides, notes, whitepapers that talk about installing Oracle database on Linux.
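    For reference, a minimal sketch of the usual remedy, assuming the failing command is ulimit -l for the oracle user (the values are examples; size them to your RAM and Oracle's requirements): add the following to /etc/security/limits.conf and log in again:
    oracle   soft   memlock   3145728
    oracle   hard   memlock   3145728
    Here 3145728 KB is 3GB; the Oracle installation guides suggest setting memlock close to (slightly below) physical RAM when HugePages are in use.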

  • Prctl -n project.max-shm-memory -i process $$

    Hi all,
    when I execute the following command: prctl -n project.max-shm-memory -i process $$
    its output is:
    NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
    project.max-shm-memory
            privileged      5.85TB      -   deny                                 -
    That is 5.85TB, while RAM is only 32GB.
    How can I change it for the oracle user?

    What does your /etc/project file say?
    Mine is (showing oracle user):
    oracle:100::oracle::process.max-sem-nsems=(priv,300,deny);project.max-sem-ids=(priv,100,deny);project.max-shm-ids=(priv,512,deny);project.max-shm-memory=(priv,8589934592,deny)
    That is 8 GB RAM allowed for oracle use (max-shm-memory).
    Change it using
    projmod -sK "project.max-shm-memory=(priv,8G,deny)" oracle
    Jan
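    A usage note: projmod edits /etc/project, so the new limit applies to processes that join the project afterwards; restart the instance, then verify with:
    prctl -n project.max-shm-memory -i project oracle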

  • Max-shm-memory (Solaris 11.1)

    I wonder if someone can give a definitive answer on this, or point me to somewhere that does?
    If I have a 64GB RAM server running Solaris 11.1 that will run a single Oracle instance, what is the correct setting to make for max-shm-memory?
    Should it be 64GB, or something a bit smaller? Or something a lot smaller?
    I have read the installation documentation, but it gives examples of 2GB, 4GB and so on. They don't seem relevant to a 64GB+ server.

    Thank you, but that document doesn't answer my questions.
    Specifically, it states at one point that "project.max-shm-memory=(privileged,51539607552,deny); ... sets a limit of 48GB per shared memory segment" (which is true, as far as I understand it). But it then goes on to say in the next breath that "The project.max-shm-memory limit is the total shared memory size for your project, i.e. the maximum total of all your segments." Which, to my mind, contradicts its first statement.
    The article then also goes on to give an example where "This system has 8GB of memory..." and shows the author setting "projmod -s -K "project.max-shm-memory=(privileged,4GB,deny)" 'user.oracle'"... so are we to deduce that you should set max-shm-memory to 50% of your physically-available RAM? Or not??
    I had actually read this before I posted. It's the same sort of document I see over and over on this subject: lots of examples, some contradictory, but none stating what principles should govern the setting of max-shm-memory, and none stating what the consequences of (for example) allocating 100% of physically available RAM as max-shm-memory would be.
    So thank you for the reference, but my question still stands: can someone provide a definitive answer on what the setting here should be? It's a 64GB server and will run nothing but Oracle database, with a single instance. max-shm-memory should be set to.... what, exactly?
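    One principle that may help, offered as a general observation rather than the definitive ruling asked for above: project.max-shm-memory is a ceiling checked when segments are created, not a reservation, so setting it at or near 100% of physical RAM does not itself consume or pin any memory; the practical risk is only that an oversized SGA could then push the system into paging. On a dedicated single-instance database server, a common approach is therefore to set the limit near physical RAM and let SGA_MAX_SIZE/SGA_TARGET do the real sizing, e.g. (the 60G value is an example):
    projmod -s -K "project.max-shm-memory=(priv,60G,deny)" user.oracle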

  • Solaris Environment Manager maxs out memory.

    We have a Forte system in production, running on two Solaris 2.6 boxes. Each
    has its own environment and they are connected. Our environments are called
    "Data Center" and "Call Center". Our Forte memory settings are fairly
    standard. We start at 20MB and max out at 64MB. We also have Forte keepalive
    enabled, with the Forte recommended values. We are running Forte 3.0.L.3 and
    using Oracle 8.0.5 for our database (as if it mattered). Each environment
    has 1 or 2 NT Server boxes in it, and the clients are all NT4 using a model
    node. Our Solaris machines have the patch installed that fixes the problem
    that Peggy Adrian posted about earlier in the year.
    Three times now in the past week, we have suddenly had the environment
    manager for the data center start chewing up memory. When the data
    center manager gets to about 64MB, the data center environment manager
    starts to do the same thing. Soon after, both environments die a hideous
    death, and we have to go in and start the node managers again.
    Please note that we always export our environments to a ".edf" file and
    bootstrap the environment manager (removing the environment repository)
    whenever we install a new copy of the Forte applications, or whenever our
    node manager dies (which seems far more frequent than I would like).
    Interestingly, the environment repository hadn't grown any after the
    environment manager "runs up the curtain and joins the choir invisible".
    If anyone can throw some light on this, and point us in the direction to
    look to solve this problem it would be much appreciated.
    Nick.

    Hi,
    @Sunny: Thanks for the response; the referenced note was already checked and the parameters are in sync as per the note.
    @Mohit: SAP wouldn't proceed to the create database step if the Oracle software was not installed. Thanks for the response.
    @Markus: Thanks, I agree with you, but I have a doubt in this area. Isn't it the case that project.max-shm-memory is the new parameter we need to set in the local zone, rather than using shmsys:shminfo_shmmax in /etc/system? Do we still need to maintain this parameter in /etc/system in the global zone?
    As per SUN doc below parameter was obsolete from Solaris 10.
    The following parameters are obsolete.
    ■ shmsys:shminfo_shmmni
    ■ shmsys:shminfo_shmmax
    As per your suggestion, do we need to set the parameters below in that case? Please clarify.
    Parameter                  Replaced by Resource Control    Recommended Value
    semsys:seminfo_semmni      project.max-sem-ids             100
    semsys:seminfo_semmsl      process.max-sem-nsems           256
    shmsys:shminfo_shmmax      project.max-shm-memory          4294967295
    shmsys:shminfo_shmmni      project.max-shm-ids             100
    Also, the findings of /etc/release:
    more /etc/release
    Solaris 10 10/08 s10s_u6wos_07b SPARC
    Regards,
    Sitarama.

  • Sql Server max server memory

     Hi,
    I know that the max server memory property is about the physical memory limit for the buffer pool. But some say that it is about the VAS (virtual address space) limit for the buffer pool. I know that VAS includes both physical memory and virtual memory, which would mean that max server memory is not equal to the max physical memory for the buffer pool.
    Which one is true?

    Both are true, depending on the memory model your SQL Server is using.
    In the conventional memory model, max server memory is the memory limit for the BPool (the BPool can be backed by RAM or the page file), so there is no guarantee that the BPool will always stay in physical memory; it can be paged out to the page file when there is memory pressure.
    In the Lock Pages in Memory and large pages memory models, the BPool can't be paged and is always placed in RAM, so it is the limit for the BPool in RAM.
    I assume you are referring to the page file as virtual memory and RAM as physical memory.
    Read http://mssqlwiki.com/2013/03/26/sql-server-lock-pages-in-memory/ and http://mssqlwiki.com/sqlwiki/sql-performance/basics-of-sql-server-memory-architecture/ and you will get clarity.
    Thank you,
    Karthick P.K
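    For completeness, the setting itself is changed with sp_configure; a minimal sketch (the server name and the 8192MB value are examples only):
    sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
    sqlcmd -S localhost -Q "EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;"
    As described above, this caps the buffer pool but does not by itself keep it in RAM; that requires granting the Lock Pages in Memory privilege to the SQL Server service account.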

  • Determine the current max shared memory size

    Hello,
    How do I determine the max shared memory value in effect on the box I have? I don't wish to trust /etc/system, as that value may not be the effective value unless the system has been rebooted after the setting was made.
    Any pointers would be appreciated.
    Thanks
    Mahesh

    You can get the current values using adb:
    # adb -k
    physmem 3e113
    shminfo$<shminfo
    shminfo:
    shminfo: shmmax shmmin shmmni
    10000000 c8 c8
    shminfo+0x14: shmseg
    c8
    The shminfo structure holds the values being used and the shminfo
    macro will print out the values in hex.
    Alan
    Sun Developer Technical Support
    http://www.sun.com/developers/support
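    An alternative that avoids the kernel debugger, assuming a release where these IPC tunables still exist: sysdef prints the values currently in effect, in decimal:
    # sysdef | grep -i shm
    Look for "max shared memory segment size (SHMMAX)" in the IPC section of the output. On Solaris 10 and later, the tunable is replaced by a resource control and is queried with prctl -n project.max-shm-memory instead.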

  • MacBook won't start after latest OS update ("Possible memory corruption")

    I am running a late 2009 MacBook, 250GB, 4GB RAM. I have just applied the latest OS update (a standard, regular update), plus iTunes and an older iMovie update. I did the necessary restart and began using iTunes. After 5 minutes, it crashed, and when I turned it back on, the Apple logo startup screen was filled with console text. The only thing I can do is force it to shut down. At the top of the console text, it now says "possible memory corruption".
    Latest update: I am now met with a black screen every time I turn on - the "sleep" light is on, but that appears to be all that's happening.
    Help!

    Do you have any 3rd party memory modules installed? What system did you upgrade from? What MacBook model do you have?
    Check the memory modules to make sure they are securely seated and try again. If you have 3rd party modules installed, remove them, leave just the original Apple modules, and try a restart.
    OT

  • Possible Memory Leak, many instances of macromedia.jdbc.oracle.OracleConnection, serviceFactory

    I'm using a plug-in for Eclipse to help identify possible memory leaks; however, we are having trouble interpreting the results. The top, and pretty much the only, suspect is this...
    7,874 instances of "macromedia.jdbc.oracle.OracleConnection", loaded by "coldfusion.bootstrap.BootstrapClassLoader @ 0xf935218" occupy 604,781,904 (71.02%) bytes. These instances are referenced from one instance of "java.util.HashMap$Entry[]", loaded by "<system class loader>"
    Any ideas what could cause this many instances? How do we track this back to a particular cfm or cfc? Do these numbers seem high, or is that normal? The system in question normally only has 30-60 concurrent users.
    The one item I'm a little skeptical of is the use of the "coldfusion.server.ServiceFactory" and "coldfusion.sql.QueryTable" objects. We use them for thousands of different queries, even queries within large loops. Could these be the problem? Each time we need to execute a query we do a createObject for each of these objects, and when done we close the connection. I recently found a similar example where they close the recordSet first and then close the connection; we are not closing the recordSet. Any help at all is much appreciated.

    It could simply be caused by the obvious: a query or queries making a large number of connections and/or consuming large resources on the Oracle database. Use Eclipse to search your application folder for queries. Can you identify any queries that could be the culprit? Is there a loop containing a method or an instance of a component that queries an Oracle database?
    What's your ColdFusion version? If you're on CF8 or CF9, then the server monitor (in the ColdFusion Administrator) might tell you the process responsible for the high consumption.
