Polling on a practice for REDO logs

I would like to run a poll on the security practice below, because it does not seem to be commonly deployed. If this practice is followed in your company for the core databases, could you please reply to my posting and state your industry, DBA headcount and total number of employees? I will collate the results later.
Previous posting 14/4/04:
Can the archive and redo logs be maintained in such a way that no single administrator (sysadmin or DBA) has access to all copies of the logs? This is to prevent a disgruntled party from deleting the logs and rendering point-in-time recovery impossible. What is the most common way to achieve this? Thanks for any replies :-)
(I am thinking of writing the logs to 2 servers, in which one server is fully owned and not accessible to the SysAd.)
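A minimal sketch of that two-destination idea, assuming the database runs in archivelog mode (the path and the `standby_db` service name are hypothetical):

```sql
-- Duplicate every archived redo log to two independent destinations:
-- /u01/arch is local; standby_db is a hypothetical remote service on a
-- server the SysAd cannot access.
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch MANDATORY' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=standby_db OPTIONAL'  SCOPE=BOTH;
```

With a MANDATORY local destination and a second copy shipped off-host, neither administrator alone can destroy every copy of the archived logs.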

Are you trying to use the Data Guard Broker and issue commands in SQL*Plus at the same time?
If so, this does not work: choose one tool and stick with it.

Similar Messages

  • Retention for redo logs

    Hi ,
    Is it possible to set retention for redo logs in 11g R2 and the OS Version will be AIX 6.1?

    936639 wrote:
    Hi ,
Is it possible to set retention for redo logs in 11g R2 and the OS Version will be AIX 6.1?

Redo logs are written to in a cyclical manner; you do not configure retention for them. If you're speaking of archive logs (archived redo logs), then yes, retention policies can be configured for them.
    http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmconfb.htm#i1019318
    -Justin
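For reference, the archived-log retention Justin mentions is configured through RMAN. A hedged sketch (the 7-day window and deletion policy are example values, not recommendations):

```sql
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;
```

The first command controls how long backups (and the archived logs needed to recover them) are kept; the second lets RMAN delete archived logs only once they are backed up.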

  • Best practice - online redo logs and virtualization

    I have a 10.1.0.4 instance (soon to be migrated to 11gr2) running under Windows Server 2003.
    We use a non-standard disk distribution scheme -
    on the c: drive we have oracle_home as well as directories for control files and online redo logs.
    on the d: drive we have datafiles
    on the e: drive we have archive log files and another directory with online redo logs and another copy of control file
    my question is this:
    is it smart practice to have ANY online redo logs or control file on the same spindle with archive logs?
Our setup works fairly well, but we are in the process of migrating the instance, first to an ESX server and SAN, and then to 11gR2 64-bit under Server 2008 64-bit. When we bring up our instance on the VM for testing and benchmark the ESX server (dual Xeon 3.4GHz with 48GB RAM running against a FalconStor NSS SAN with 15k SAS disks over iSCSI) against the production physical server (dual Xeon 2.0GHz with 4GB RAM using direct-attached SATA 7200rpm drives), we find that some processes run faster on the ESX box and some run 40-100% slower. Running Statspack seems to identify lots of physical read waits as well as some waits for redo and control files.
    Is it possible that in addition to any overhead introduced by ESX and iSCSI (we are running Jumbo Frames over 1gb) we may have contention because the archive logs are on the same "spindle" (virtual) as the online redo and control files?
We're looking at multiple avenues to bring the 2 servers in line from a performance standpoint - db configuration, memory allocation, a possible move to a 10Gb network, a possible move to an SSD storage tray, possible application rewrites. But as the simplest low-hanging-fruit idea: if these files should not be on the same spindle, that's an easy change to make and could possibly eke out an improvement.
    Ideas?
    Mike

    Hi,
The "old" Oracle standard is to use as many spindles as possible.
It looks to me like you have only 1 disk with several partitions on it??
In my honest opinion you should in any case start by physically separating the OS from Oracle, so leave the C: drive to the Windows OS.
Take another physically separate D: drive to install your application.
Use yet another set of physical drives, preferably in a RAID10 setup, for your database and redo logs.
And finally yet another disk for the archive logs.
We have recently configured a Windows 2008 server with an 11g DB, which pretty much follows the above setup.
All non-RAID10 disks are RAID1 (mirror) and we even have some SSDs for hot tables and redo logs.
The machine, or should I say the database, operates like a high-speed train: very, very fast.
Of course, keep in mind the number of cores (not only for licensing) and the amount of memory.
Try to prevent the system from swapping, because that is a performance killer!
Edit: and even if you put a virtual layer in between, try to separate the virtual disks as much as possible over physical disks.
    Success!
    FJFranken
    Edited by: fjfranken on 7-okt-2011 7:19

  • Best practice for Error logging and alert emails

I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and OLE DB Destination. Scheduled jobs run the SSIS packages every night.
I'm looking for advice on what the best practice is for a production environment. The requirements are the following:
    1) If error occurs with tasks, email is sent to admin
    2) If error occurs with tasks, we have log in flat file or DB
    Kenny_I

Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always log data in the way we desire. So we've developed a framework to add the necessary functionality inside event handlers and log the required data in the required format to a set of tables that we maintain internally.
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Third party tools for redo log

Dear,
Are there any third-party tools that can read the redo logs of Oracle9i?
Many thanks

Most 3rd parties gave up when they realized that
a) Oracle now owns, and includes for free, LogMiner, and
b) Oracle is free, and willing, to change the contents of the log files at any time, including in patch releases.

  • Following omf for redo log files

    Hi,
When DB_CREATE_ONLINE_LOG_DEST_1 is set, I would like to know whether 2 log groups will be created at the location specified, and if so, whether this would create any problems in archivelog mode.

    Before DBWn can write a modified buffer, all redo records associated with the changes to the buffer must be written to disk (the write-ahead protocol). If DBWn finds that some redo records have not been written, it signals LGWR to write the redo records to disk and waits for LGWR to complete writing the redo log buffer before it can write out the data buffers.
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14220/process.htm#i21918
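On the original OMF question: with only DB_CREATE_ONLINE_LOG_DEST_1 set, OMF places one member of each log group at that location; setting a second destination gives each group a second, multiplexed member. A hedged sketch (paths are illustrative):

```sql
-- With both destinations set, each new log group gets two members,
-- one in each location:
ALTER SYSTEM SET db_create_online_log_dest_1 = '/u02/oradata' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_2 = '/u03/oradata' SCOPE=BOTH;
ALTER DATABASE ADD LOGFILE SIZE 256M;  -- creates one group with two members
```

Multiplexed members do not themselves cause problems in archivelog mode; ARCn archives the group once it becomes inactive regardless of how many members it has.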

  • Recommended os block size for redo log

    Hi
    Platform AIX
    Oracle 10.2.0.4
    Is there any recommended filesystem blocksize where redo log should be placed?
    We have tested with 512 bytes and 4096 bytes. We got better performance on 512 bytes in terms of avg wait on log file sync.
    Is there oracle/Aix recommendation on the same?

    978881 wrote:
    Hi
    Platform AIX
    Oracle 10.2.0.4
    Is there any recommended filesystem blocksize where redo log should be placed?
    We have tested with 512 bytes and 4096 bytes. We got better performance on 512 bytes in terms of avg wait on log file sync.
Is there any oracle/AIX recommendation on the same?

The recommendation is to create redo logs on a mount with agblksize=512 bytes. Please refer to the following link (page 60), which is an Oracle+IBM technical brief:
    http://www-03.ibm.com/support/techdocs/atsmastr.nsf/5cb5ed706d254a8186256c71006d2e0a/bae31a51a00fa0018625721f00268dc4/$FILE/Oracle%20Architecture%20and%20Tuning%20on%20AIX%20(v%202.30).pdf
    Regards,
    S.K.

  • Best practice for CM log and Out

    Hi,
    I've following architecture:
    DB SERVER = DBSERVER1
    APPS SERVER =APP1 AND APP2
    LB= Netscaler
    PCP configured.
What is the best practice for the CM log and out files? Do I need to keep these files on the DB server and mount them to APP1 and APP2?
Please advise.
    Thanks

    Hi,
See, if you want to save the logfiles of the other CM node when a crash happens, then why would you want to share APPLCSF? If the node hosting the shared APPLCSF crashes, then ALL the logfiles (of both nodes) are gone. Keep the same directories and path for APPLCSF on both nodes, so that CM node A has its logfiles local in its directories and CM node B has its logfiles local in its directories.
In the end, what I said above is my own thinking, so follow it or not as you wish. But always follow what Oracle says; the poster should also check with Oracle.
    Regards
    Edited by: hungry_dba on Jan 21, 2010 1:20 PM

  • Redo log best practice for performace - ASM 11g

    Hi All,
I am new to the ASM world... can you please give your valuable suggestions on the topic below?
What is the best practice for online redo log files? I read somewhere that Oracle recommends having only two disk groups: one for all the DB files, control files and online redo log files, and another disk group for recovery files, like multiplexed redo files, archive log files etc.
Will there be any performance improvement from making a separate diskgroup for the online redo logs (separate from datafiles, control files etc.)?
I am looking for an Oracle document with the best practices for redo logs (performance) on ASM.
Please share your valuable views on this.
    Regards.

ASM is only a filesystem.
What really counts for I/O performance is the storage design, i.e. the RAID level used, array sharing, hard disk RPM, and so on.
ASM itself is only a layer to read and write on ASM disks, which means that if your storage design is OK, ASM will be OK. (Of course there are best practices for ASM, but storage must come first.)
e.g. Is there any performance improvement from making a different diskgroup?
It depends, and that leads to another question:
Are the LUNs on the same array?
If yes, the performance will end up at the same point, no matter whether you have one or many diskgroups.
Comment added:
Comparing ASM to Filesystem in benchmarks (Doc ID 1153664.1)
Your main concern should be how many IOPS, what latency and what throughput the storage can give you. Based on these values you will know if your redo logs will be fine.
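As a sketch of the two-diskgroup layout the poster read about (the diskgroup names DATA and FRA are hypothetical):

```sql
-- DATA holds datafiles, control files and online redo logs;
-- FRA holds multiplexed copies, archived logs and backups.
-- db_recovery_file_dest_size must be set before db_recovery_file_dest.
ALTER SYSTEM SET db_create_file_dest          = '+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_1  = '+DATA' SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest_size   = 200G    SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest        = '+FRA'  SCOPE=BOTH;
```

As the reply notes, whether a separate redo diskgroup helps depends on whether its LUNs actually sit on different physical storage.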

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
I'm tuning my database with Spotlight. I already have this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
The servers are not in RAID5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer

Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below); they could be useful for accelerating read intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles and many more customers who have used SSD to accelerate Oracle.
> Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.

Do you honestly think this is practical and usable advice Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):

# Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.

Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission critical databases and a huge return can be made on accelerating Oracle.
Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.

Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher storage density with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).

Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.

Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is to have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.

Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges, compared to normal HDDs (which store the data inside a Faraday cage).

Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.

Slower than conventional disks on sequential I/O

Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this, you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
Limited write cycles. Typical flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.

Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
> Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
> .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.

Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out, so that you can do it at no cost from us.

  • Best Practice for configuring ZS3 storage

    Hi Team
Can someone help me with the best way to configure the data profile for a ZS3-2 storage array which is mainly going to be used for an Oracle database and EBU application?
If we are planning to install the LDoms OS on a storage LUN, then what data profile would be the best one?
    Regards
    AB

Without knowing more, I would do the following, being very conservative:
Create two pools, with pool0 on controller0 and pool1 on controller1. This is to take advantage of both controllers' DRAM, for better performance. Regardless of RAIDZ or RAID 1+0, I would do this.
I am assuming both trays have the same HDDs (900G?).
Make the first pool RAID 1+0, with 2 write SSDs and one read SSD. I would use this pool for redo logs and database files, mounting via OISP or dNFS.
Make the second pool RAIDZ, with 2 write SSDs and one read SSD. I would place my RMAN mount here, and any binary mounts and archive logs. Maybe DB files too, if using partitioning. HCC is a great technology to use if your application will benefit from it.
Now for the reality check!
The 9 times out of 10 method...
Realistically... just use RAIDZ on both pools. In reality RAID 1+0 generates more IOPS in the array and leads to more DB I/O wait time! It's true!
If you are headed to ECO I'll sit down and explain. Short version: RAIDZ does fewer writes, which means better database performance.

  • Disk array configurations with oracle redo logs and flash recovery area.

    Dear Oracle users,
We are planning to buy a new server for Oracle Database 10g Standard Edition. We put the Oracle database files, redo logs, and flash recovery area each on their own disk array. My question is: what is the best disk array configuration for the redo logs and flash recovery area, RAID 10 or RAID 1? Is it possible to duplicate the flash recovery area to another location (such as a network drive) at the same time? Since we only have a single disk array controller to connect to the disk arrays, I am trying to avoid a single point of failure that would lose the archive logs and daily backup.
    thanks,
    Belinda

Thank you so much for the suggestion. Could you please let me know the answer to my question about FRA redundancy?
"Is it possible to duplicate the flash recovery area to another location (such as a network drive) at the same time? Since we only have a single disk array controller to connect to the disk arrays, I am trying to avoid a single point of failure that would lose the archive logs and daily backup."
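One hedged way to keep a second copy of the archived logs outside the FRA (the network path here is hypothetical):

```sql
-- The primary archive destination stays in the FRA; a second, optional
-- destination writes each archived log to a network drive as well.
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/mnt/netdrive/arch OPTIONAL';
```

This does not mirror the whole FRA (backups stay where RMAN writes them), but it does protect the archived logs against losing the single array.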

  • Private strand flush not complete how to find optimal size of redo log file

Hi,
I am using Oracle 10.2.0 on a Unix system and getting "Private strand flush not complete" in the alert log file. I know this is due to a checkpoint not being completed.
I need to increase the size of the redo log files or add a new group to the database. I have "log file switch (checkpoint incomplete)" in the top 5 wait events.
I can't change any database parameters. I have three redo log groups and the log files are 250MB in size. I want to know a suitable size to avoid the problem.
    select * from v$instance_recovery;
    RECOVERY_ESTIMATED_IOS     ACTUAL_REDO_BLKS     TARGET_REDO_BLKS     LOG_FILE_SIZE_REDO_BLKS     LOG_CHKPT_TIMEOUT_REDO_BLKS     LOG_CHKPT_INTERVAL_REDO_BLKS     FAST_START_IO_TARGET_REDO_BLKS     TARGET_MTTR     ESTIMATED_MTTR     CKPT_BLOCK_WRITES     OPTIMAL_LOGFILE_SIZE     ESTD_CLUSTER_AVAILABLE_TIME     WRITES_MTTR     WRITES_LOGFILE_SIZE     WRITES_LOG_CHECKPOINT_SETTINGS     WRITES_OTHER_SETTINGS     WRITES_AUTOTUNE     WRITES_FULL_THREAD_CKPT
625     9286     9999     921600          9999          0     9     112166207               0     0     219270206     0     3331591     5707793

Please suggest or tell me the way to find out a suitable size to avoid the problem.
    thanks
    umesh

    How often should a database archive its logs
    Re: Redo log size increase and performance
Please read the above threads and the great replies by HJR sir. If you wish to build conceptual knowledge, you should add this to your notes:
    "If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control."
    Source:http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10752/build_db.htm#19559
    Pl also see ML Doc 274264.1 (REDO LOGS SIZING ADVISORY) on tips to calculate the optimal size for redo logs in 10g databases
    Source:Re: Redo Log Size in R12
    HTH
    Girish Sharma
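Putting the advice above into commands, a rough sketch (the 512M size is illustrative; check the advisor first):

```sql
-- OPTIMAL_LOGFILE_SIZE is reported in megabytes.
SELECT optimal_logfile_size FROM v$instance_recovery;

-- Add larger groups, then cycle through and drop the old ones:
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 6 SIZE 512M;
ALTER SYSTEM SWITCH LOGFILE;
-- repeat the switch until groups 1-3 show STATUS = 'INACTIVE' in v$log
ALTER DATABASE DROP LOGFILE GROUP 1;
```

Note that the original poster said he cannot change database parameters; adding and dropping log groups as above does not require a parameter change.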

  • Redo log space requests VALUE high

    SELECT name||' = '||value
    FROM v$sysstat
    WHERE name = 'redo log space requests';
I am noticing 40+ space requests on some of my Oracle 9.2.0.5 databases.
On another 7.3.4 DB I see this over 140, but that DB is shut down only on weekends, so I presume this cumulative value keeps increasing.
I already have 5 groups of 20MB. Do I add another 2 groups or increase their sizes?
I read somewhere that I'd have to increase the log_buffer parameter. So how do we deal with this issue? Any repercussions if I leave this as it is for now?
Would the cause of this be that the redo logs are not big enough, or something else?
    Thanks.

    user4874781 wrote:
    Thanks for your response Charles.
So if I understand this correctly... "redo log space requests" corresponds to either an incorrectly sized redo log file, or DBWR / CKPT needing to be tuned.
Maybe I was interpreting this the wrong way (possibly):
"The statistic 'redo log space requests' reflects the number of times a user process waits for space in the redo log buffer." If that is the case, when there are longer waits for this event, I was under the assumption that log_buffer needs to be increased.
http://www.idevelopment.info/data/Oracle/DBA_tips/Tuning/TUNING_6.shtml
* Yes, the waits have increased to 70 as of now (since 40 yesterday; the DB was started Saturday night and will run till the weekend). There is less activity as of now, since the day has just started, so it would definitely rise by the end of the day.

I took a look at the above article, and I think I understand why the article is slightly confusing. With due respect to the author, the article was last modified 16-Apr-2001, which I believe is before the Oracle documentation was better clarified regarding these statistics. From:
http://download.oracle.com/docs/cd/B14117_01/server.101/b10755/stats002.htm
"redo log space requests: Number of times the active log file is full and Oracle must wait for disk space to be allocated for the redo log entries. Such space is created by performing a log switch. Log files that are small in relation to the size of the SGA or the commit rate of the work load can cause problems. When the log switch occurs, Oracle must ensure that all committed dirty buffers are written to disk before switching to a new log file. If you have a large SGA full of dirty buffers and small redo log files, a log switch must wait for DBWR to write dirty buffers to disk before continuing."
"redo log space wait time: Total elapsed waiting time for "redo log space requests" in 10s of milliseconds"
It is quite possible that the "redo log space requests" will increase with every redo log file switch, which should not be too much of a concern. You may want to focus a little more on the "redo log space wait time" statistic, which indicates how much wait time was involved. You might also want to focus on the system-wide wait event interface, examining how the accumulated wait time increases from one sampling of the statistics to the next.

* I have 1 log switch every 11 minutes; BTW, I have 5 log groups of 20 MB each as of now. So I am assuming 40 MB for 4 or 5 log groups should be fine as per your suggestion?

If you have the disk space, considering that it is an ancient AIX box, you may want to set the redo log files to an even larger size, possibly 100MB (or larger). You may then want to force periodic switches of the redo log, for instance once an hour, or once every 30 minutes.

* This is an ancient AIX box with 512 MB RAM. Is the redo log located on a fast device? I'd have to find that out (any hints on that?)
* Decreasing the log_buffer is possible on weekends, since I'd have to bounce the DB for it to take effect.
I will increase the log files accordingly and hopefully the space waits will reduce. Thanks again.

Someone else on the forums, possibly Sybrand, might be familiar with AIX and be able to provide you with an answer. If you are seeing system-wide increasing wait time for redo related waits, then that might be an indication that the redo logs are not located on a fast device.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
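Charles's sizing suggestion can be reduced to arithmetic: if switches happen because the log fills, the redo generation rate is roughly the log size divided by the observed switch interval, and the size needed for a target interval follows directly. A back-of-the-envelope sketch (the function name is my own, not an Oracle API):

```python
def suggested_redo_log_size_mb(current_size_mb: float,
                               observed_switch_minutes: float,
                               desired_switch_minutes: float = 30) -> float:
    """Estimate a redo log size for a desired switch interval.

    Assumes switches are driven by the log filling up (no forced
    switches), so redo rate ~= current size / observed switch interval.
    """
    redo_rate_mb_per_min = current_size_mb / observed_switch_minutes
    return redo_rate_mb_per_min * desired_switch_minutes

# The thread's numbers: 20MB logs switching every 11 minutes.
size = suggested_redo_log_size_mb(20, 11)
print(round(size))  # about 55MB for a 30-minute interval
```

Rounding the ~55MB estimate up to the 100MB Charles suggests leaves comfortable headroom for activity peaks.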

  • Redo Log times ?

During peak times my redo log sync times go sky high, like 104.65ms. How can I tune this to be lower?

    The "sync" is the time it takes for a process to send a message to the log writer, the log writer to notice the message and write any pending log buffer to disc, and then send a message back to the initiating process.
    When the return message is sent, the process may be at the bottom of a long run queue, and take some time to get to the top of the queue and notice the message. This is most likely to happen when your system gets busy - especially with a large number of concurrent processes that are enthusiastically consuming the CPU.
    An increase in the "sync" time may be about CPU and process counts and have nothing to do with the ability of the log writer to do its job.
    Unfortunately, the log writer write times aren't captured properly in 9i, so it's hard to tell if the log writer really is starting to collapse under pressure; but if you've got sync times of 1/10 second a typical job isn't going to see it as much of a threat unless it's a pseudo-batch job that is inserting and committing 1,000,000 rows one at a time. So you have to ask if there are any jobs which are visibly much slower under load, rather than depending on the totalled log file sync time as an indicator - a check of v$session_event may give you a view on whether any session is suffering particularly badly.
    You might want to read this from Kevin Closson:
    http://kevinclosson.wordpress.com/2007/07/21/manly-men-only-use-solid-state-disk-for-redo-logging-lgwr-io-is-simple-but-not-lgwr-processing/
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
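To act on Jonathan's suggestion of checking v$session_event, a hedged example query:

```sql
-- Sessions suffering the most 'log file sync' waits
-- (TIME_WAITED and AVERAGE_WAIT are in centiseconds).
SELECT sid, total_waits, time_waited, average_wait
FROM   v$session_event
WHERE  event = 'log file sync'
ORDER  BY time_waited DESC;
```

A handful of sessions dominating this list points at a commit-heavy job rather than a system-wide log writer problem.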
